Test Report: Docker_Linux_crio 19664

b0eadc949d6b6708e1f550519f8385f72d7fe4f5:2024-09-19:36285

Test fail (15/327)

TestAddons/parallel/Registry (73.05s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.204858ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-tsz4w" [bdd1e643-0c83-4fed-a147-0dd79f789e29] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00335759s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-rgdgh" [fc0b3544-d729-4e33-a260-ef1ab277d08f] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003107777s
addons_test.go:342: (dbg) Run:  kubectl --context addons-685250 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-685250 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-685250 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.092164142s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-685250 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-685250 ip
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-685250 addons disable registry --alsologtostderr -v=1
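The failing probe can be replayed by hand against the same profile. A minimal sketch, assuming the addons-685250 cluster from this run is still up; the context, image, and service URL are taken verbatim from addons_test.go:347 above:

	# Re-run the registry connectivity check the test performs
	kubectl --context addons-685250 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

A healthy registry answers with HTTP/1.1 200. Here the pod timed out instead, and since both registry and registry-proxy pods were reported Running, that suggests in-cluster DNS or service routing rather than the registry pods themselves.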
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-685250
helpers_test.go:235: (dbg) docker inspect addons-685250:

-- stdout --
	[
	    {
	        "Id": "cdadbc576653c86e10c24ff9ab47a4cd6b7ecae7a3e166835365c09bb919cdaf",
	        "Created": "2024-09-19T18:39:26.544485958Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 762128,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-19T18:39:26.653035442Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bb3bcbaabeeeadbf6b43ae7d1d07e504b3c8a94ec024df89bcb237eba4f5e9b3",
	        "ResolvConfPath": "/var/lib/docker/containers/cdadbc576653c86e10c24ff9ab47a4cd6b7ecae7a3e166835365c09bb919cdaf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cdadbc576653c86e10c24ff9ab47a4cd6b7ecae7a3e166835365c09bb919cdaf/hostname",
	        "HostsPath": "/var/lib/docker/containers/cdadbc576653c86e10c24ff9ab47a4cd6b7ecae7a3e166835365c09bb919cdaf/hosts",
	        "LogPath": "/var/lib/docker/containers/cdadbc576653c86e10c24ff9ab47a4cd6b7ecae7a3e166835365c09bb919cdaf/cdadbc576653c86e10c24ff9ab47a4cd6b7ecae7a3e166835365c09bb919cdaf-json.log",
	        "Name": "/addons-685250",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-685250:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-685250",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/719a21df389c5cfbde7422092d8e14fa0af9502fcf666f7bf69a458b574172f9-init/diff:/var/lib/docker/overlay2/71eee05749e16aef5497ee0d3682f846917f1ee6949d544cdec1fff2723452d5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/719a21df389c5cfbde7422092d8e14fa0af9502fcf666f7bf69a458b574172f9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/719a21df389c5cfbde7422092d8e14fa0af9502fcf666f7bf69a458b574172f9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/719a21df389c5cfbde7422092d8e14fa0af9502fcf666f7bf69a458b574172f9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-685250",
	                "Source": "/var/lib/docker/volumes/addons-685250/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-685250",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-685250",
	                "name.minikube.sigs.k8s.io": "addons-685250",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1b0ccece079b2c012374acf46f9c349cae0c8bd9ae1a208e2d0acc049d21c7cb",
	            "SandboxKey": "/var/run/docker/netns/1b0ccece079b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33518"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33519"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33522"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33520"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33521"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-685250": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "3c159902c31cb41244d3423728e25a3f29e7e8e24a95c6da692d29e053f66798",
	                    "EndpointID": "51640df6c09057e35d4d5a9f04688e387f2981906971ee1afa85b24730ac60a3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-685250",
	                        "cdadbc576653"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
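Individual fields can be pulled from an inspect dump like the one above with a Go template instead of scanning the JSON. A small sketch; the 22/tcp lookup is the same template minikube itself runs later in this log:

	# Host port published for the node's SSH port (33518 in this run)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-685250
	# Node IP on the addons-685250 network (192.168.49.2 in this run)
	docker container inspect -f '{{(index .NetworkSettings.Networks "addons-685250").IPAddress}}' addons-685250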
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-685250 -n addons-685250
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-685250 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-685250 logs -n 25: (1.266404417s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-845536   | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC |                     |
	|         | -p download-only-845536                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC | 19 Sep 24 18:38 UTC |
	| delete  | -p download-only-845536                                                                     | download-only-845536   | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC | 19 Sep 24 18:38 UTC |
	| start   | -o=json --download-only                                                                     | download-only-759185   | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC |                     |
	|         | -p download-only-759185                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| delete  | -p download-only-759185                                                                     | download-only-759185   | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| delete  | -p download-only-845536                                                                     | download-only-845536   | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| delete  | -p download-only-759185                                                                     | download-only-759185   | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| start   | --download-only -p                                                                          | download-docker-985684 | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | download-docker-985684                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-985684                                                                   | download-docker-985684 | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-515604   | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | binary-mirror-515604                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:32895                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-515604                                                                     | binary-mirror-515604   | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| addons  | disable dashboard -p                                                                        | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | addons-685250                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | addons-685250                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-685250 --wait=true                                                                | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:41 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-685250 ssh cat                                                                       | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:50 UTC | 19 Sep 24 18:50 UTC |
	|         | /opt/local-path-provisioner/pvc-83c31ed0-fc42-4249-94b0-a7e77464cc71_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-685250 addons disable                                                                | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:50 UTC | 19 Sep 24 18:50 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:50 UTC | 19 Sep 24 18:50 UTC |
	|         | addons-685250                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:50 UTC | 19 Sep 24 18:50 UTC |
	|         | -p addons-685250                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-685250 ip                                                                            | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	| addons  | addons-685250 addons disable                                                                | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC |                     |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-685250 addons disable                                                                | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 18:39:03
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 18:39:03.200212  761388 out.go:345] Setting OutFile to fd 1 ...
	I0919 18:39:03.200467  761388 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:39:03.200476  761388 out.go:358] Setting ErrFile to fd 2...
	I0919 18:39:03.200481  761388 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:39:03.200718  761388 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-753213/.minikube/bin
	I0919 18:39:03.201426  761388 out.go:352] Setting JSON to false
	I0919 18:39:03.202398  761388 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12093,"bootTime":1726759050,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 18:39:03.202515  761388 start.go:139] virtualization: kvm guest
	I0919 18:39:03.204903  761388 out.go:177] * [addons-685250] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0919 18:39:03.206237  761388 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 18:39:03.206258  761388 notify.go:220] Checking for updates...
	I0919 18:39:03.208919  761388 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 18:39:03.210261  761388 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-753213/kubeconfig
	I0919 18:39:03.211535  761388 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-753213/.minikube
	I0919 18:39:03.212802  761388 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 18:39:03.213964  761388 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 18:39:03.215359  761388 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 18:39:03.237406  761388 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0919 18:39:03.237534  761388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:39:03.283495  761388 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-19 18:39:03.274719559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 18:39:03.283600  761388 docker.go:318] overlay module found
	I0919 18:39:03.286271  761388 out.go:177] * Using the docker driver based on user configuration
	I0919 18:39:03.287521  761388 start.go:297] selected driver: docker
	I0919 18:39:03.287534  761388 start.go:901] validating driver "docker" against <nil>
	I0919 18:39:03.287545  761388 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 18:39:03.288361  761388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:39:03.333412  761388 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-19 18:39:03.324780201 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 18:39:03.333593  761388 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 18:39:03.333839  761388 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 18:39:03.335585  761388 out.go:177] * Using Docker driver with root privileges
	I0919 18:39:03.336930  761388 cni.go:84] Creating CNI manager for ""
	I0919 18:39:03.336986  761388 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 18:39:03.336997  761388 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 18:39:03.337090  761388 start.go:340] cluster config:
	{Name:addons-685250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-685250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 18:39:03.338526  761388 out.go:177] * Starting "addons-685250" primary control-plane node in "addons-685250" cluster
	I0919 18:39:03.339809  761388 cache.go:121] Beginning downloading kic base image for docker with crio
	I0919 18:39:03.340995  761388 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0919 18:39:03.342026  761388 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 18:39:03.342057  761388 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19664-753213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0919 18:39:03.342055  761388 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0919 18:39:03.342063  761388 cache.go:56] Caching tarball of preloaded images
	I0919 18:39:03.342182  761388 preload.go:172] Found /home/jenkins/minikube-integration/19664-753213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 18:39:03.342194  761388 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0919 18:39:03.342520  761388 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/config.json ...
	I0919 18:39:03.342542  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/config.json: {Name:mk74efcccadcff6ea4a0787d2832be4be3984d30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:03.359223  761388 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0919 18:39:03.359412  761388 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0919 18:39:03.359431  761388 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0919 18:39:03.359435  761388 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0919 18:39:03.359442  761388 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0919 18:39:03.359450  761388 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0919 18:39:14.708408  761388 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0919 18:39:14.708455  761388 cache.go:194] Successfully downloaded all kic artifacts
	I0919 18:39:14.708519  761388 start.go:360] acquireMachinesLock for addons-685250: {Name:mk56c74bc959dec1fb8992b737e0e35c0cd4ad03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 18:39:14.708642  761388 start.go:364] duration metric: took 84.107µs to acquireMachinesLock for "addons-685250"
	I0919 18:39:14.708671  761388 start.go:93] Provisioning new machine with config: &{Name:addons-685250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-685250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 18:39:14.708780  761388 start.go:125] createHost starting for "" (driver="docker")
	I0919 18:39:14.710766  761388 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0919 18:39:14.711013  761388 start.go:159] libmachine.API.Create for "addons-685250" (driver="docker")
	I0919 18:39:14.711068  761388 client.go:168] LocalClient.Create starting
	I0919 18:39:14.711150  761388 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca.pem
	I0919 18:39:14.824308  761388 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/cert.pem
	I0919 18:39:15.025789  761388 cli_runner.go:164] Run: docker network inspect addons-685250 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 18:39:15.041206  761388 cli_runner.go:211] docker network inspect addons-685250 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 18:39:15.041292  761388 network_create.go:284] running [docker network inspect addons-685250] to gather additional debugging logs...
	I0919 18:39:15.041313  761388 cli_runner.go:164] Run: docker network inspect addons-685250
	W0919 18:39:15.056441  761388 cli_runner.go:211] docker network inspect addons-685250 returned with exit code 1
	I0919 18:39:15.056478  761388 network_create.go:287] error running [docker network inspect addons-685250]: docker network inspect addons-685250: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-685250 not found
	I0919 18:39:15.056490  761388 network_create.go:289] output of [docker network inspect addons-685250]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-685250 not found
	
	** /stderr **
	I0919 18:39:15.056606  761388 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 18:39:15.072776  761388 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001446920}
	I0919 18:39:15.072824  761388 network_create.go:124] attempt to create docker network addons-685250 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0919 18:39:15.072890  761388 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-685250 addons-685250
	I0919 18:39:15.132522  761388 network_create.go:108] docker network addons-685250 192.168.49.0/24 created
	I0919 18:39:15.132554  761388 kic.go:121] calculated static IP "192.168.49.2" for the "addons-685250" container
	I0919 18:39:15.132644  761388 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 18:39:15.147671  761388 cli_runner.go:164] Run: docker volume create addons-685250 --label name.minikube.sigs.k8s.io=addons-685250 --label created_by.minikube.sigs.k8s.io=true
	I0919 18:39:15.163961  761388 oci.go:103] Successfully created a docker volume addons-685250
	I0919 18:39:15.164048  761388 cli_runner.go:164] Run: docker run --rm --name addons-685250-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-685250 --entrypoint /usr/bin/test -v addons-685250:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
	I0919 18:39:22.072772  761388 cli_runner.go:217] Completed: docker run --rm --name addons-685250-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-685250 --entrypoint /usr/bin/test -v addons-685250:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (6.908674607s)
	I0919 18:39:22.072803  761388 oci.go:107] Successfully prepared a docker volume addons-685250
	I0919 18:39:22.072836  761388 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 18:39:22.072868  761388 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 18:39:22.072944  761388 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19664-753213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-685250:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 18:39:26.483616  761388 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19664-753213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-685250:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (4.41062526s)
	I0919 18:39:26.483649  761388 kic.go:203] duration metric: took 4.410778812s to extract preloaded images to volume ...
	W0919 18:39:26.483780  761388 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0919 18:39:26.483868  761388 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 18:39:26.529192  761388 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-685250 --name addons-685250 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-685250 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-685250 --network addons-685250 --ip 192.168.49.2 --volume addons-685250:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
	I0919 18:39:26.802037  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Running}}
	I0919 18:39:26.820911  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:26.839572  761388 cli_runner.go:164] Run: docker exec addons-685250 stat /var/lib/dpkg/alternatives/iptables
	I0919 18:39:26.880131  761388 oci.go:144] the created container "addons-685250" has a running status.
	I0919 18:39:26.880165  761388 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa...
	I0919 18:39:27.339670  761388 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 18:39:27.361758  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:27.379045  761388 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 18:39:27.379068  761388 kic_runner.go:114] Args: [docker exec --privileged addons-685250 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 18:39:27.421090  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:27.437982  761388 machine.go:93] provisionDockerMachine start ...
	I0919 18:39:27.438079  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:27.456233  761388 main.go:141] libmachine: Using SSH client type: native
	I0919 18:39:27.456524  761388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33518 <nil> <nil>}
	I0919 18:39:27.456542  761388 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 18:39:27.594819  761388 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-685250
	
	I0919 18:39:27.594862  761388 ubuntu.go:169] provisioning hostname "addons-685250"
	I0919 18:39:27.594952  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:27.613368  761388 main.go:141] libmachine: Using SSH client type: native
	I0919 18:39:27.613592  761388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33518 <nil> <nil>}
	I0919 18:39:27.613622  761388 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-685250 && echo "addons-685250" | sudo tee /etc/hostname
	I0919 18:39:27.754187  761388 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-685250
	
	I0919 18:39:27.754262  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:27.771895  761388 main.go:141] libmachine: Using SSH client type: native
	I0919 18:39:27.772132  761388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33518 <nil> <nil>}
	I0919 18:39:27.772152  761388 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-685250' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-685250/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-685250' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 18:39:27.903239  761388 main.go:141] libmachine: SSH cmd err, output: <nil>: 
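The empty SSH output above means the /etc/hosts fixup succeeded; the script acts only when no line already maps the hostname, rewriting an existing 127.0.1.1 entry in place or appending one. The same logic expressed in Go as a pure function over the file contents (a sketch; reading and writing the remote file over SSH is elided):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// fixupHosts mirrors the shell above: leave the file alone if the hostname is
// already mapped, otherwise rewrite the 127.0.1.1 line or append a new one.
func fixupHosts(hosts, hostname string) string {
	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(hostname)+`$`).MatchString(hosts) {
		return hosts
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + hostname + "\n"
}

func main() {
	fmt.Print(fixupHosts("127.0.0.1 localhost\n127.0.1.1 old-name\n", "addons-685250"))
}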
	I0919 18:39:27.903269  761388 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19664-753213/.minikube CaCertPath:/home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19664-753213/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19664-753213/.minikube}
	I0919 18:39:27.903324  761388 ubuntu.go:177] setting up certificates
	I0919 18:39:27.903341  761388 provision.go:84] configureAuth start
	I0919 18:39:27.903404  761388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-685250
	I0919 18:39:27.919357  761388 provision.go:143] copyHostCerts
	I0919 18:39:27.919427  761388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19664-753213/.minikube/key.pem (1679 bytes)
	I0919 18:39:27.919543  761388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19664-753213/.minikube/ca.pem (1082 bytes)
	I0919 18:39:27.919618  761388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19664-753213/.minikube/cert.pem (1123 bytes)
	I0919 18:39:27.919681  761388 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19664-753213/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca-key.pem org=jenkins.addons-685250 san=[127.0.0.1 192.168.49.2 addons-685250 localhost minikube]
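The server certificate is signed by the minikube CA with exactly the SANs listed above, so the endpoint verifies under any of its names or IPs. A self-contained crypto/x509 sketch of that step, with a throwaway in-memory CA standing in for the ca.pem/ca-key.pem pair from the log (errors elided for brevity; the 26280h lifetime mirrors the CertExpiration value in the cluster config further below):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA in place of the on-disk ca.pem/ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert carrying the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-685250"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		DNSNames:     []string{"addons-685250", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}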
	I0919 18:39:28.160212  761388 provision.go:177] copyRemoteCerts
	I0919 18:39:28.160283  761388 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 18:39:28.160320  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:28.177005  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:28.271718  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 18:39:28.293331  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 18:39:28.314500  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 18:39:28.335572  761388 provision.go:87] duration metric: took 432.21249ms to configureAuth
	I0919 18:39:28.335604  761388 ubuntu.go:193] setting minikube options for container-runtime
	I0919 18:39:28.335790  761388 config.go:182] Loaded profile config "addons-685250": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 18:39:28.335896  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:28.352244  761388 main.go:141] libmachine: Using SSH client type: native
	I0919 18:39:28.352438  761388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33518 <nil> <nil>}
	I0919 18:39:28.352454  761388 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 18:39:28.570762  761388 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 18:39:28.570788  761388 machine.go:96] duration metric: took 1.132783666s to provisionDockerMachine
	I0919 18:39:28.570801  761388 client.go:171] duration metric: took 13.859723313s to LocalClient.Create
	I0919 18:39:28.570823  761388 start.go:167] duration metric: took 13.859810827s to libmachine.API.Create "addons-685250"
	I0919 18:39:28.570832  761388 start.go:293] postStartSetup for "addons-685250" (driver="docker")
	I0919 18:39:28.570846  761388 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 18:39:28.570928  761388 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 18:39:28.570969  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:28.587920  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:28.684315  761388 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 18:39:28.687444  761388 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 18:39:28.687482  761388 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 18:39:28.687493  761388 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 18:39:28.687502  761388 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 18:39:28.687516  761388 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-753213/.minikube/addons for local assets ...
	I0919 18:39:28.687596  761388 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-753213/.minikube/files for local assets ...
	I0919 18:39:28.687629  761388 start.go:296] duration metric: took 116.788714ms for postStartSetup
	I0919 18:39:28.687939  761388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-685250
	I0919 18:39:28.704801  761388 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/config.json ...
	I0919 18:39:28.705071  761388 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 18:39:28.705124  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:28.721672  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:28.816217  761388 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 18:39:28.820354  761388 start.go:128] duration metric: took 14.111556683s to createHost
	I0919 18:39:28.820377  761388 start.go:83] releasing machines lock for "addons-685250", held for 14.111720986s
	I0919 18:39:28.820433  761388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-685250
	I0919 18:39:28.837043  761388 ssh_runner.go:195] Run: cat /version.json
	I0919 18:39:28.837093  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:28.837137  761388 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 18:39:28.837212  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:28.853306  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:28.853640  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:29.015641  761388 ssh_runner.go:195] Run: systemctl --version
	I0919 18:39:29.019690  761388 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 18:39:29.156274  761388 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 18:39:29.160605  761388 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 18:39:29.178821  761388 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 18:39:29.178900  761388 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 18:39:29.204313  761388 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
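Pre-existing bridge and podman CNI configs are renamed rather than deleted, so they stop shadowing the CNI minikube installs later while staying recoverable (the loopback config disabled just above gets the same suffix). A Go sketch of that pass:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableCNIConfigs renames every match of the given globs in dir with a
// .mk_disabled suffix, skipping files that are already disabled.
func disableCNIConfigs(dir string, patterns []string) error {
	for _, p := range patterns {
		matches, err := filepath.Glob(filepath.Join(dir, p))
		if err != nil {
			return err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return err
			}
			fmt.Println("disabled", m)
		}
	}
	return nil
}

func main() {
	disableCNIConfigs("/etc/cni/net.d", []string{"*bridge*", "*podman*"})
}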
	I0919 18:39:29.204337  761388 start.go:495] detecting cgroup driver to use...
	I0919 18:39:29.204370  761388 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0919 18:39:29.204409  761388 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 18:39:29.218099  761388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 18:39:29.228094  761388 docker.go:217] disabling cri-docker service (if available) ...
	I0919 18:39:29.228158  761388 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 18:39:29.240433  761388 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 18:39:29.253142  761388 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 18:39:29.326278  761388 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 18:39:29.406802  761388 docker.go:233] disabling docker service ...
	I0919 18:39:29.406859  761388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 18:39:29.424951  761388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 18:39:29.435168  761388 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 18:39:29.514566  761388 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 18:39:29.591355  761388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 18:39:29.601869  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 18:39:29.616535  761388 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0919 18:39:29.616600  761388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:39:29.625293  761388 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 18:39:29.625347  761388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:39:29.634150  761388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:39:29.642705  761388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:39:29.651092  761388 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 18:39:29.659117  761388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:39:29.667830  761388 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:39:29.681755  761388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
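Taken together, the sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to cgroupfs with conmon_cgroup = "pod", and seed default_sysctls so unprivileged ports start at 0. An approximate in-memory Go equivalent of those edits (a sketch only; the real flow applies them over SSH, and the conmon_cgroup delete/re-add is simplified here):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// patchCrioConf applies the same three edits to the config text.
func patchCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	if !strings.Contains(conf, "default_sysctls") {
		conf += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}
	return conf
}

func main() {
	fmt.Print(patchCrioConf("[crio.image]\npause_image = \"x\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"))
}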
	I0919 18:39:29.690617  761388 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 18:39:29.698112  761388 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 18:39:29.705724  761388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:39:29.785529  761388 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 18:39:29.878210  761388 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 18:39:29.878295  761388 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 18:39:29.881824  761388 start.go:563] Will wait 60s for crictl version
	I0919 18:39:29.881889  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:39:29.884918  761388 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 18:39:29.918116  761388 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 18:39:29.918200  761388 ssh_runner.go:195] Run: crio --version
	I0919 18:39:29.952309  761388 ssh_runner.go:195] Run: crio --version
	I0919 18:39:29.988286  761388 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0919 18:39:29.989606  761388 cli_runner.go:164] Run: docker network inspect addons-685250 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 18:39:30.005833  761388 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 18:39:30.009469  761388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 18:39:30.020164  761388 kubeadm.go:883] updating cluster {Name:addons-685250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-685250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 18:39:30.020281  761388 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 18:39:30.020325  761388 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 18:39:30.083858  761388 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 18:39:30.083879  761388 crio.go:433] Images already preloaded, skipping extraction
	I0919 18:39:30.083926  761388 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 18:39:30.116167  761388 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 18:39:30.116190  761388 cache_images.go:84] Images are preloaded, skipping loading
	I0919 18:39:30.116199  761388 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0919 18:39:30.116364  761388 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-685250 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-685250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 18:39:30.116428  761388 ssh_runner.go:195] Run: crio config
	I0919 18:39:30.156650  761388 cni.go:84] Creating CNI manager for ""
	I0919 18:39:30.156675  761388 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 18:39:30.156688  761388 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 18:39:30.156711  761388 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-685250 NodeName:addons-685250 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 18:39:30.156845  761388 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-685250"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 18:39:30.156908  761388 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0919 18:39:30.165387  761388 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 18:39:30.165448  761388 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 18:39:30.173207  761388 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0919 18:39:30.188946  761388 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 18:39:30.205638  761388 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
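The 2151-byte kubeadm.yaml.new shipped above is the rendered form of the config printed earlier. A minimal text/template sketch of that kind of rendering, trimmed to a few InitConfiguration fields (the template fragment and field names are illustrative, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// A trimmed, illustrative fragment of the kubeadm config shown in the log.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	t.Execute(os.Stdout, map[string]any{
		"AdvertiseAddress": "192.168.49.2",
		"APIServerPort":    8443,
		"CRISocket":        "unix:///var/run/crio/crio.sock",
		"NodeName":         "addons-685250",
	})
}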
	I0919 18:39:30.222877  761388 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0919 18:39:30.226085  761388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 18:39:30.236096  761388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:39:30.319405  761388 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 18:39:30.332104  761388 certs.go:68] Setting up /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250 for IP: 192.168.49.2
	I0919 18:39:30.332125  761388 certs.go:194] generating shared ca certs ...
	I0919 18:39:30.332140  761388 certs.go:226] acquiring lock for ca certs: {Name:mkac4e621bd7a8886df3f6838bd34b99172c371a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.332275  761388 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19664-753213/.minikube/ca.key
	I0919 18:39:30.528690  761388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-753213/.minikube/ca.crt ...
	I0919 18:39:30.528724  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/ca.crt: {Name:mked4ee6d8831516d03c840d59935532e3f21cd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.528941  761388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-753213/.minikube/ca.key ...
	I0919 18:39:30.528958  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/ca.key: {Name:mkcb02ba3f86d66b352caba2841d6dd380f76edb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.529067  761388 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19664-753213/.minikube/proxy-client-ca.key
	I0919 18:39:30.624034  761388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-753213/.minikube/proxy-client-ca.crt ...
	I0919 18:39:30.624068  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/proxy-client-ca.crt: {Name:mkaa7904f1d229a9140b6f62d1d672cf00a2f2cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.624277  761388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-753213/.minikube/proxy-client-ca.key ...
	I0919 18:39:30.624295  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/proxy-client-ca.key: {Name:mkb6bb0d0409e9bd1f254506994f2a2447e5cc79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.624398  761388 certs.go:256] generating profile certs ...
	I0919 18:39:30.624464  761388 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.key
	I0919 18:39:30.624490  761388 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt with IP's: []
	I0919 18:39:30.752151  761388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt ...
	I0919 18:39:30.752185  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt: {Name:mk69a3ec8793b5371f583f88b2bebacea2af07ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.752390  761388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.key ...
	I0919 18:39:30.752406  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.key: {Name:mk7d143fc1d3dd645310e55acf6f951beafc9848 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.752506  761388 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.key.89e36966
	I0919 18:39:30.752526  761388 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.crt.89e36966 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0919 18:39:30.915660  761388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.crt.89e36966 ...
	I0919 18:39:30.915697  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.crt.89e36966: {Name:mkdb41eb017de5d424bda2067b62b8ceafaf07c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.915911  761388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.key.89e36966 ...
	I0919 18:39:30.915931  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.key.89e36966: {Name:mkbc3d5e5a7473c69994a57b2f0a8b8707ffe9e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.916041  761388 certs.go:381] copying /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.crt.89e36966 -> /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.crt
	I0919 18:39:30.916130  761388 certs.go:385] copying /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.key.89e36966 -> /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.key
	I0919 18:39:30.916176  761388 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/proxy-client.key
	I0919 18:39:30.916195  761388 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/proxy-client.crt with IP's: []
	I0919 18:39:31.094514  761388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/proxy-client.crt ...
	I0919 18:39:31.094599  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/proxy-client.crt: {Name:mk9dc2f777ee8d63ffc9f5a10453c45f6382bf93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:31.094776  761388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/proxy-client.key ...
	I0919 18:39:31.094791  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/proxy-client.key: {Name:mk32678ed11fe18054a48114b5283e466fb989c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:31.094999  761388 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 18:39:31.095055  761388 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca.pem (1082 bytes)
	I0919 18:39:31.095092  761388 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/cert.pem (1123 bytes)
	I0919 18:39:31.095124  761388 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/key.pem (1679 bytes)
	I0919 18:39:31.095878  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 18:39:31.120600  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 18:39:31.142506  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 18:39:31.164187  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 18:39:31.185942  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0919 18:39:31.207396  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 18:39:31.229449  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 18:39:31.250877  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 18:39:31.272098  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 18:39:31.293403  761388 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 18:39:31.308896  761388 ssh_runner.go:195] Run: openssl version
	I0919 18:39:31.314017  761388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 18:39:31.322554  761388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:39:31.325634  761388 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:39:31.325693  761388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:39:31.331892  761388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 18:39:31.340220  761388 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 18:39:31.343178  761388 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 18:39:31.343230  761388 kubeadm.go:392] StartCluster: {Name:addons-685250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-685250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 18:39:31.343328  761388 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 18:39:31.343377  761388 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 18:39:31.376569  761388 cri.go:89] found id: ""
	I0919 18:39:31.376645  761388 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 18:39:31.384955  761388 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 18:39:31.393013  761388 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 18:39:31.393065  761388 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 18:39:31.400980  761388 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 18:39:31.400998  761388 kubeadm.go:157] found existing configuration files:
	
	I0919 18:39:31.401035  761388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 18:39:31.408813  761388 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 18:39:31.408861  761388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 18:39:31.416662  761388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 18:39:31.424342  761388 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 18:39:31.424386  761388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 18:39:31.431658  761388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 18:39:31.438947  761388 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 18:39:31.438996  761388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 18:39:31.445986  761388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 18:39:31.453391  761388 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 18:39:31.453444  761388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 18:39:31.460734  761388 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
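The init invocation above prefixes PATH with the cached binaries directory and suppresses the preflight checks a shared Docker host cannot satisfy (swap, occupied ports, kernel config). A small Go sketch of assembling that command line (helper name illustrative):

package main

import (
	"fmt"
	"strings"
)

// initCommand builds the shell command run over SSH above.
func initCommand(version, config string, ignores []string) string {
	return fmt.Sprintf(
		`sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init --config %s --ignore-preflight-errors=%s`,
		version, config, strings.Join(ignores, ","))
}

func main() {
	fmt.Println(initCommand("v1.31.1", "/var/tmp/minikube/kubeadm.yaml",
		[]string{"Swap", "NumCPU", "Mem", "SystemVerification"}))
}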
	I0919 18:39:31.495835  761388 kubeadm.go:310] W0919 18:39:31.495183    1289 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0919 18:39:31.496393  761388 kubeadm.go:310] W0919 18:39:31.495823    1289 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0919 18:39:31.513844  761388 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-gcp\n", err: exit status 1
	I0919 18:39:31.563421  761388 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 18:39:40.033093  761388 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0919 18:39:40.033184  761388 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 18:39:40.033278  761388 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 18:39:40.033324  761388 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-gcp
	I0919 18:39:40.033356  761388 kubeadm.go:310] OS: Linux
	I0919 18:39:40.033398  761388 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 18:39:40.033437  761388 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0919 18:39:40.033482  761388 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 18:39:40.033521  761388 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 18:39:40.033566  761388 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 18:39:40.033607  761388 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 18:39:40.033655  761388 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 18:39:40.033699  761388 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 18:39:40.033736  761388 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0919 18:39:40.033793  761388 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 18:39:40.033891  761388 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 18:39:40.034008  761388 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 18:39:40.034100  761388 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 18:39:40.035787  761388 out.go:235]   - Generating certificates and keys ...
	I0919 18:39:40.035950  761388 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 18:39:40.036208  761388 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 18:39:40.036312  761388 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 18:39:40.036391  761388 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 18:39:40.036476  761388 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 18:39:40.036548  761388 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 18:39:40.036641  761388 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 18:39:40.036746  761388 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-685250 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 18:39:40.036794  761388 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 18:39:40.036940  761388 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-685250 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 18:39:40.037024  761388 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 18:39:40.037075  761388 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 18:39:40.037112  761388 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 18:39:40.037161  761388 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 18:39:40.037201  761388 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 18:39:40.037258  761388 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 18:39:40.037338  761388 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 18:39:40.037448  761388 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 18:39:40.037533  761388 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 18:39:40.037626  761388 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 18:39:40.037718  761388 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 18:39:40.039316  761388 out.go:235]   - Booting up control plane ...
	I0919 18:39:40.039415  761388 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 18:39:40.039524  761388 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 18:39:40.039619  761388 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 18:39:40.039728  761388 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 18:39:40.039841  761388 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 18:39:40.039909  761388 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 18:39:40.040093  761388 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 18:39:40.040237  761388 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 18:39:40.040290  761388 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.645723ms
	I0919 18:39:40.040356  761388 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0919 18:39:40.040404  761388 kubeadm.go:310] [api-check] The API server is healthy after 4.502008624s
	I0919 18:39:40.040492  761388 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 18:39:40.040605  761388 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 18:39:40.040687  761388 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 18:39:40.040875  761388 kubeadm.go:310] [mark-control-plane] Marking the node addons-685250 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 18:39:40.040960  761388 kubeadm.go:310] [bootstrap-token] Using token: ijm4ly.86nu9uivdcvgfqko
	I0919 18:39:40.042478  761388 out.go:235]   - Configuring RBAC rules ...
	I0919 18:39:40.042563  761388 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 18:39:40.042634  761388 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 18:39:40.042751  761388 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 18:39:40.042898  761388 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 18:39:40.043013  761388 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 18:39:40.043111  761388 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 18:39:40.043261  761388 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 18:39:40.043324  761388 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 18:39:40.043388  761388 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 18:39:40.043398  761388 kubeadm.go:310] 
	I0919 18:39:40.043485  761388 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 18:39:40.043499  761388 kubeadm.go:310] 
	I0919 18:39:40.043591  761388 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 18:39:40.043599  761388 kubeadm.go:310] 
	I0919 18:39:40.043634  761388 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 18:39:40.043719  761388 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 18:39:40.043765  761388 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 18:39:40.043770  761388 kubeadm.go:310] 
	I0919 18:39:40.043812  761388 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 18:39:40.043817  761388 kubeadm.go:310] 
	I0919 18:39:40.043857  761388 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 18:39:40.043862  761388 kubeadm.go:310] 
	I0919 18:39:40.043902  761388 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 18:39:40.043999  761388 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 18:39:40.044089  761388 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 18:39:40.044096  761388 kubeadm.go:310] 
	I0919 18:39:40.044175  761388 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 18:39:40.044258  761388 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 18:39:40.044266  761388 kubeadm.go:310] 
	I0919 18:39:40.044382  761388 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ijm4ly.86nu9uivdcvgfqko \
	I0919 18:39:40.044505  761388 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0d3b67c6a36b796da7b157a4d4acdf893c00e58f1cfebf42e9b32e5d1fd17179 \
	I0919 18:39:40.044525  761388 kubeadm.go:310] 	--control-plane 
	I0919 18:39:40.044531  761388 kubeadm.go:310] 
	I0919 18:39:40.044599  761388 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 18:39:40.044606  761388 kubeadm.go:310] 
	I0919 18:39:40.044684  761388 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ijm4ly.86nu9uivdcvgfqko \
	I0919 18:39:40.044851  761388 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0d3b67c6a36b796da7b157a4d4acdf893c00e58f1cfebf42e9b32e5d1fd17179 
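The sha256:0d3b67c6... value in the join command is kubeadm's CA certificate hash: the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA, which lets joining nodes pin the CA they discover via the bootstrap token. A Go sketch that recomputes it from a CA PEM file passed as argv[1]:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile(os.Args[1]) // e.g. /var/lib/minikube/certs/ca.crt
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}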
	I0919 18:39:40.044867  761388 cni.go:84] Creating CNI manager for ""
	I0919 18:39:40.044876  761388 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 18:39:40.046449  761388 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0919 18:39:40.047787  761388 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 18:39:40.051623  761388 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0919 18:39:40.051638  761388 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 18:39:40.069179  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 18:39:40.264712  761388 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 18:39:40.264794  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:40.264800  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-685250 minikube.k8s.io/updated_at=2024_09_19T18_39_40_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef minikube.k8s.io/name=addons-685250 minikube.k8s.io/primary=true
	I0919 18:39:40.272124  761388 ops.go:34] apiserver oom_adj: -16
	I0919 18:39:40.450150  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:40.950813  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:41.450429  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:41.950463  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:42.450542  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:42.950992  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:43.451199  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:43.950242  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:44.012691  761388 kubeadm.go:1113] duration metric: took 3.747963897s to wait for elevateKubeSystemPrivileges
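The burst of "kubectl get sa default" runs every ~500ms above is a poll: the minikube-rbac clusterrolebinding created earlier targets kube-system:default, and that ServiceAccount only exists once the controller manager has created it, so minikube waits. A sketch of the wait loop (helper name and timeout are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA retries `kubectl get sa default` until it succeeds or the
// deadline passes, mirroring the 500ms cadence visible in the log.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig).Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready within %v", timeout)
}

func main() {
	fmt.Println(waitForDefaultSA("/var/lib/minikube/binaries/v1.31.1/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute))
}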
	I0919 18:39:44.012729  761388 kubeadm.go:394] duration metric: took 12.669506054s to StartCluster
	I0919 18:39:44.012758  761388 settings.go:142] acquiring lock: {Name:mkba96297ae0a710684a3a2a45be357ed7205f56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:44.012903  761388 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19664-753213/kubeconfig
	I0919 18:39:44.013318  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/kubeconfig: {Name:mk7bd3287a61595c1c20478c3038a77f636ffaa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:44.013536  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 18:39:44.013566  761388 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 18:39:44.013636  761388 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0919 18:39:44.013758  761388 addons.go:69] Setting yakd=true in profile "addons-685250"
	I0919 18:39:44.013778  761388 addons.go:69] Setting helm-tiller=true in profile "addons-685250"
	I0919 18:39:44.013797  761388 addons.go:69] Setting registry=true in profile "addons-685250"
	I0919 18:39:44.013801  761388 addons.go:69] Setting ingress=true in profile "addons-685250"
	I0919 18:39:44.013794  761388 addons.go:69] Setting metrics-server=true in profile "addons-685250"
	I0919 18:39:44.013782  761388 addons.go:234] Setting addon yakd=true in "addons-685250"
	I0919 18:39:44.013816  761388 addons.go:234] Setting addon ingress=true in "addons-685250"
	I0919 18:39:44.013818  761388 addons.go:69] Setting storage-provisioner=true in profile "addons-685250"
	I0919 18:39:44.013824  761388 addons.go:234] Setting addon metrics-server=true in "addons-685250"
	I0919 18:39:44.013824  761388 config.go:182] Loaded profile config "addons-685250": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 18:39:44.013835  761388 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-685250"
	I0919 18:39:44.013850  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.013852  761388 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-685250"
	I0919 18:39:44.013855  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.013828  761388 addons.go:234] Setting addon storage-provisioner=true in "addons-685250"
	I0919 18:39:44.013859  761388 addons.go:69] Setting ingress-dns=true in profile "addons-685250"
	I0919 18:39:44.013875  761388 addons.go:69] Setting inspektor-gadget=true in profile "addons-685250"
	I0919 18:39:44.013891  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.013904  761388 addons.go:69] Setting default-storageclass=true in profile "addons-685250"
	I0919 18:39:44.013905  761388 addons.go:69] Setting gcp-auth=true in profile "addons-685250"
	I0919 18:39:44.013920  761388 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-685250"
	I0919 18:39:44.013928  761388 mustload.go:65] Loading cluster: addons-685250
	I0919 18:39:44.013810  761388 addons.go:234] Setting addon helm-tiller=true in "addons-685250"
	I0919 18:39:44.013987  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.014106  761388 config.go:182] Loaded profile config "addons-685250": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 18:39:44.013760  761388 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-685250"
	I0919 18:39:44.014180  761388 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-685250"
	I0919 18:39:44.014213  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.014224  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.014234  761388 addons.go:69] Setting volcano=true in profile "addons-685250"
	I0919 18:39:44.014289  761388 addons.go:234] Setting addon volcano=true in "addons-685250"
	I0919 18:39:44.014321  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.014369  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.014420  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.014444  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.014529  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.014668  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.014766  761388 addons.go:69] Setting volumesnapshots=true in profile "addons-685250"
	I0919 18:39:44.014784  761388 addons.go:234] Setting addon volumesnapshots=true in "addons-685250"
	I0919 18:39:44.014224  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.014811  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.014813  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.013790  761388 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-685250"
	I0919 18:39:44.014885  761388 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-685250"
	I0919 18:39:44.014921  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.013892  761388 addons.go:234] Setting addon ingress-dns=true in "addons-685250"
	I0919 18:39:44.015381  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.015478  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.013782  761388 addons.go:69] Setting cloud-spanner=true in profile "addons-685250"
	I0919 18:39:44.015604  761388 addons.go:234] Setting addon cloud-spanner=true in "addons-685250"
	I0919 18:39:44.015632  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.013894  761388 addons.go:234] Setting addon inspektor-gadget=true in "addons-685250"
	I0919 18:39:44.015698  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.016016  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.016089  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.015481  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.016191  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.013861  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.017759  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.020298  761388 out.go:177] * Verifying Kubernetes components...
	I0919 18:39:44.015297  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.013811  761388 addons.go:234] Setting addon registry=true in "addons-685250"
	I0919 18:39:44.026436  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.028211  761388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:39:44.037105  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.048567  761388 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0919 18:39:44.048657  761388 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 18:39:44.050374  761388 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0919 18:39:44.050397  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0919 18:39:44.050461  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.052343  761388 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0919 18:39:44.060733  761388 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 18:39:44.062707  761388 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0919 18:39:44.062730  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0919 18:39:44.062789  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.081544  761388 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0919 18:39:44.081631  761388 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0919 18:39:44.083278  761388 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0919 18:39:44.083339  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0919 18:39:44.083408  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.086304  761388 out.go:177]   - Using image docker.io/registry:2.8.3
	I0919 18:39:44.086735  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0919 18:39:44.088743  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0919 18:39:44.088872  761388 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0919 18:39:44.091114  761388 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-685250"
	I0919 18:39:44.091164  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.091489  761388 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0919 18:39:44.091508  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0919 18:39:44.091564  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.091649  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.091952  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.092800  761388 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0919 18:39:44.092818  761388 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0919 18:39:44.092889  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.094032  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0919 18:39:44.101275  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0919 18:39:44.103871  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0919 18:39:44.106750  761388 addons.go:234] Setting addon default-storageclass=true in "addons-685250"
	I0919 18:39:44.106804  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.107282  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	W0919 18:39:44.109675  761388 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0919 18:39:44.110326  761388 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 18:39:44.110334  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0919 18:39:44.112386  761388 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:39:44.112408  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 18:39:44.112472  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.112565  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0919 18:39:44.113382  761388 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0919 18:39:44.114898  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0919 18:39:44.114906  761388 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 18:39:44.114925  761388 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 18:39:44.114984  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.116662  761388 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0919 18:39:44.116682  761388 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0919 18:39:44.116748  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.119259  761388 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0919 18:39:44.120516  761388 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0919 18:39:44.120540  761388 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0919 18:39:44.120610  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.123773  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.136078  761388 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0919 18:39:44.138681  761388 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0919 18:39:44.138709  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0919 18:39:44.138773  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.144207  761388 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0919 18:39:44.145527  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.145578  761388 out.go:177]   - Using image docker.io/busybox:stable
	I0919 18:39:44.146995  761388 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0919 18:39:44.147017  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0919 18:39:44.147076  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.152809  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.156308  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0919 18:39:44.157886  761388 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0919 18:39:44.157903  761388 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0919 18:39:44.157925  761388 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0919 18:39:44.157985  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.162886  761388 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 18:39:44.162909  761388 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 18:39:44.162966  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.163450  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.166881  761388 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0919 18:39:44.166906  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0919 18:39:44.166969  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.172034  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.180781  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
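
The pipeline above patches the coredns ConfigMap in place: it reads the Corefile with kubectl, uses sed to splice a `hosts` plugin block ahead of the `forward . /etc/resolv.conf` directive (and a `log` directive before `errors`), then pipes the result back through `kubectl replace`. The stanza injected by the sed expression, which makes `host.minikube.internal` resolve to the host gateway from inside the cluster, is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}
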
	I0919 18:39:44.183673  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.189557  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.190040  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.198542  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.202993  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.203703  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.205321  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.208823  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.209666  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	W0919 18:39:44.241755  761388 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0919 18:39:44.241799  761388 retry.go:31] will retry after 368.513545ms: ssh: handshake failed: EOF
	W0919 18:39:44.241901  761388 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0919 18:39:44.241912  761388 retry.go:31] will retry after 353.358743ms: ssh: handshake failed: EOF
	W0919 18:39:44.241992  761388 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0919 18:39:44.242019  761388 retry.go:31] will retry after 239.291473ms: ssh: handshake failed: EOF
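
The three `sshutil.go:64` handshake failures above are not fatal: each dial goes through a retry helper that sleeps a randomized interval (hence the three different delays) before re-attempting, so concurrent dialers do not hammer the container's sshd in lockstep. A minimal sketch of that retry-with-jitter pattern in Go; this is an illustration under assumptions (caller-supplied `fn`, helper name `retryDial`), not minikube's actual retry.go:

	package main

	import (
		"errors"
		"log"
		"math/rand"
		"time"
	)

	// retryDial re-attempts fn up to maxTries times, sleeping a jittered
	// delay between attempts, mirroring the "will retry after ..." lines
	// in the log above.
	func retryDial(fn func() error, maxTries int, base time.Duration) error {
		var err error
		for i := 0; i < maxTries; i++ {
			if err = fn(); err == nil {
				return nil
			}
			// Randomize the delay so parallel dialers spread out
			// instead of retrying against sshd at the same instant.
			d := base + time.Duration(rand.Int63n(int64(base)))
			log.Printf("will retry after %v: %v", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		_ = retryDial(func() error {
			return errors.New("ssh: handshake failed: EOF")
		}, 3, 200*time.Millisecond)
	}
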
	I0919 18:39:44.351392  761388 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 18:39:44.437649  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0919 18:39:44.536099  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0919 18:39:44.541975  761388 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0919 18:39:44.542004  761388 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0919 18:39:44.544666  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:39:44.646013  761388 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0919 18:39:44.646047  761388 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0919 18:39:44.743483  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 18:39:44.743812  761388 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0919 18:39:44.743879  761388 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0919 18:39:44.839790  761388 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 18:39:44.839821  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0919 18:39:44.840867  761388 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0919 18:39:44.840892  761388 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0919 18:39:44.844891  761388 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0919 18:39:44.844913  761388 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0919 18:39:44.859724  761388 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0919 18:39:44.859754  761388 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0919 18:39:44.945601  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0919 18:39:44.948297  761388 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0919 18:39:44.948369  761388 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0919 18:39:44.953207  761388 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0919 18:39:44.953285  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0919 18:39:45.049434  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0919 18:39:45.049642  761388 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0919 18:39:45.049698  761388 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0919 18:39:45.055848  761388 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0919 18:39:45.055950  761388 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0919 18:39:45.058998  761388 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 18:39:45.059024  761388 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 18:39:45.141944  761388 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0919 18:39:45.141986  761388 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0919 18:39:45.156162  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0919 18:39:45.246810  761388 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0919 18:39:45.246840  761388 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0919 18:39:45.256490  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0919 18:39:45.437813  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0919 18:39:45.441833  761388 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 18:39:45.441871  761388 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 18:39:45.549176  761388 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0919 18:39:45.549265  761388 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0919 18:39:45.637502  761388 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0919 18:39:45.637591  761388 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0919 18:39:45.642826  761388 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.2913856s)
	I0919 18:39:45.644038  761388 node_ready.go:35] waiting up to 6m0s for node "addons-685250" to be "Ready" ...
	I0919 18:39:45.644391  761388 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.463571637s)
	I0919 18:39:45.644468  761388 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0919 18:39:45.647199  761388 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0919 18:39:45.647259  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0919 18:39:45.737336  761388 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0919 18:39:45.737429  761388 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0919 18:39:45.754802  761388 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0919 18:39:45.754834  761388 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0919 18:39:45.836195  761388 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0919 18:39:45.836236  761388 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0919 18:39:45.851797  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 18:39:45.936024  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0919 18:39:45.956936  761388 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0919 18:39:45.956972  761388 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0919 18:39:46.159873  761388 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0919 18:39:46.159908  761388 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0919 18:39:46.337448  761388 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0919 18:39:46.337478  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0919 18:39:46.356760  761388 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-685250" context rescaled to 1 replicas
	I0919 18:39:46.436892  761388 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0919 18:39:46.436928  761388 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0919 18:39:46.537037  761388 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0919 18:39:46.537072  761388 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0919 18:39:46.746236  761388 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:39:46.746266  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0919 18:39:46.854918  761388 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0919 18:39:46.855018  761388 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0919 18:39:46.946936  761388 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0919 18:39:46.946983  761388 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0919 18:39:47.236798  761388 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0919 18:39:47.236841  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0919 18:39:47.246825  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:39:47.257114  761388 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0919 18:39:47.257149  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0919 18:39:47.453170  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0919 18:39:47.542740  761388 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0919 18:39:47.542772  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0919 18:39:47.659810  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:39:47.759785  761388 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0919 18:39:47.759819  761388 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0919 18:39:47.957548  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0919 18:39:50.147172  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:39:50.150873  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.713170158s)
	I0919 18:39:50.150919  761388 addons.go:475] Verifying addon ingress=true in "addons-685250"
	I0919 18:39:50.150938  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.614729552s)
	I0919 18:39:50.151045  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.606300895s)
	I0919 18:39:50.151091  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.407584065s)
	I0919 18:39:50.151204  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.205541455s)
	I0919 18:39:50.151283  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.101743958s)
	I0919 18:39:50.151334  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.995098572s)
	I0919 18:39:50.151399  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.89486624s)
	I0919 18:39:50.151505  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.713655603s)
	I0919 18:39:50.151528  761388 addons.go:475] Verifying addon registry=true in "addons-685250"
	I0919 18:39:50.151594  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.29976078s)
	I0919 18:39:50.151618  761388 addons.go:475] Verifying addon metrics-server=true in "addons-685250"
	I0919 18:39:50.151657  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.215596812s)
	I0919 18:39:50.152907  761388 out.go:177] * Verifying ingress addon...
	I0919 18:39:50.153936  761388 out.go:177] * Verifying registry addon...
	I0919 18:39:50.153951  761388 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-685250 service yakd-dashboard -n yakd-dashboard
	
	I0919 18:39:50.155824  761388 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0919 18:39:50.157505  761388 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W0919 18:39:50.163513  761388 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
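
The `default-storageclass` failure above is the standard Kubernetes optimistic-concurrency conflict: another writer updated the `local-path` StorageClass between minikube's read and its write, so the cached resourceVersion was stale and the API server rejected the update. The usual remedy is to re-read and retry the mutation. A minimal sketch using client-go's conflict-retry helper; the function name `markNonDefault` is hypothetical and this is not minikube's actual code:

	package example

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// markNonDefault clears the default-class annotation, retrying
	// whenever the update hits a resourceVersion conflict.
	func markNonDefault(client kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			// Re-read on every attempt so the update carries a fresh
			// resourceVersion instead of the stale one that conflicted.
			sc, err := client.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
			_, err = client.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
			return err
		})
	}
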
	I0919 18:39:50.238665  761388 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0919 18:39:50.238695  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:50.238959  761388 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0919 18:39:50.238987  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:50.660404  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:50.662046  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:50.877367  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.630488674s)
	W0919 18:39:50.877434  761388 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0919 18:39:50.877461  761388 retry.go:31] will retry after 374.811419ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0919 18:39:50.877563  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.424342572s)
	I0919 18:39:51.159983  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:51.160342  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:51.251656  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.294045721s)
	I0919 18:39:51.251706  761388 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-685250"
	I0919 18:39:51.252726  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
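
The failed apply at 18:39:50 was a CRD registration race: the same `kubectl apply` both created the `VolumeSnapshotClass` CRD and instantiated an object of that kind, and the API server was not yet serving `snapshot.storage.k8s.io/v1` when the instance was validated (hence "ensure CRDs are installed first"). minikube's answer, visible above, is simply to retry, this time with `--force`. An alternative is to poll discovery until the group/version is served before applying the instances; a sketch under that assumption (hypothetical helper `waitForSnapshotAPI`, not minikube's code):

	package example

	import (
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/discovery"
	)

	// waitForSnapshotAPI polls the discovery endpoint until the API
	// server serves snapshot.storage.k8s.io/v1, i.e. until the CRDs
	// created by the first apply are established and their kinds
	// can be resolved by a second apply.
	func waitForSnapshotAPI(dc discovery.DiscoveryInterface) error {
		return wait.PollImmediate(500*time.Millisecond, 30*time.Second, func() (bool, error) {
			if _, err := dc.ServerResourcesForGroupVersion("snapshot.storage.k8s.io/v1"); err != nil {
				return false, nil // not served yet; keep polling
			}
			return true, nil
		})
	}
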
	I0919 18:39:51.253330  761388 out.go:177] * Verifying csi-hostpath-driver addon...
	I0919 18:39:51.255845  761388 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0919 18:39:51.260109  761388 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0919 18:39:51.260134  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:51.299405  761388 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0919 18:39:51.299470  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:51.319259  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:51.435849  761388 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0919 18:39:51.455177  761388 addons.go:234] Setting addon gcp-auth=true in "addons-685250"
	I0919 18:39:51.455235  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:51.455622  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:51.473709  761388 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0919 18:39:51.473768  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:51.492852  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:51.660242  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:51.660451  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:51.763672  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:52.148125  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:39:52.160486  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:52.160637  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:52.260177  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:52.659866  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:52.660361  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:52.759357  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:53.159414  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:53.160699  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:53.260412  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:53.660465  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:53.660995  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:53.760079  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:54.036339  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.783560208s)
	I0919 18:39:54.036401  761388 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.56265651s)
	I0919 18:39:54.037930  761388 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 18:39:54.039158  761388 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0919 18:39:54.040281  761388 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0919 18:39:54.040295  761388 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0919 18:39:54.060953  761388 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0919 18:39:54.060982  761388 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0919 18:39:54.078061  761388 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0919 18:39:54.078081  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0919 18:39:54.096196  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0919 18:39:54.159825  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:54.161174  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:54.259118  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:54.649396  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:39:54.664552  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:54.666437  761388 addons.go:475] Verifying addon gcp-auth=true in "addons-685250"
	I0919 18:39:54.666458  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:54.669012  761388 out.go:177] * Verifying gcp-auth addon...
	I0919 18:39:54.671405  761388 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0919 18:39:54.762155  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:54.762165  761388 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0919 18:39:54.762193  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:55.159689  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:55.161131  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:55.174401  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:55.259291  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:55.659983  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:55.660209  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:55.674181  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:55.758821  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:56.159552  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:56.161022  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:56.174326  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:56.259237  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:56.660149  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:56.660452  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:56.675011  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:56.759761  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:57.147230  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:39:57.160802  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:57.160843  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:57.174625  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:57.259483  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:57.659641  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:57.660974  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:57.674433  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:57.759804  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:58.159364  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:58.160396  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:58.175074  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:58.258973  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:58.659663  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:58.659995  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:58.674333  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:58.759220  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:59.159931  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:59.160111  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:59.174241  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:59.259030  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:59.647936  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:39:59.660361  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:59.660641  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:59.674569  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:59.759432  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:00.160240  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:00.160488  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:00.174961  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:00.259892  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:00.660179  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:00.660554  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:00.675141  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:00.758994  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:01.160048  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:01.160048  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:01.174593  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:01.259801  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:01.659777  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:01.660892  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:01.674204  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:01.759169  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:02.147887  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:02.160172  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:02.160247  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:02.174624  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:02.259598  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:02.659674  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:02.660694  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:02.674100  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:02.759727  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:03.159593  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:03.160617  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:03.174020  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:03.259297  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:03.660462  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:03.660957  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:03.674094  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:03.759774  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:04.159328  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:04.160575  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:04.174927  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:04.259749  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:04.647664  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:04.659478  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:04.661089  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:04.674181  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:04.759138  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:05.160148  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:05.160420  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:05.174732  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:05.259905  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:05.659969  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:05.660156  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:05.674731  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:05.759280  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:06.160047  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:06.160189  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:06.174412  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:06.259142  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:06.660052  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:06.660419  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:06.674781  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:06.759973  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:07.147840  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:07.159737  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:07.160196  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:07.174616  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:07.259365  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:07.659184  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:07.660781  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:07.674067  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:07.758888  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:08.160134  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:08.160271  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:08.174692  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:08.259835  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:08.659150  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:08.660428  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:08.674754  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:08.759483  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:09.159321  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:09.160653  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:09.175114  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:09.260634  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:09.647196  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:09.659462  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:09.660545  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:09.674993  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:09.759810  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:10.159952  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:10.161096  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:10.174611  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:10.259487  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:10.659118  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:10.660327  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:10.674867  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:10.759802  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:11.159342  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:11.160885  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:11.173987  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:11.259734  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:11.647819  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:11.659862  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:11.660211  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:11.674274  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:11.759168  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:12.160283  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:12.160439  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:12.175052  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:12.260097  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:12.659816  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:12.660819  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:12.674404  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:12.759164  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:13.160264  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:13.160357  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:13.174537  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:13.259736  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:13.660466  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:13.660513  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:13.674991  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:13.759495  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:14.146772  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:14.159525  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:14.159867  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:14.174094  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:14.260124  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:14.660152  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:14.660362  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:14.674852  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:14.759444  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:15.159996  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:15.160894  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:15.174310  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:15.259417  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:15.659374  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:15.660883  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:15.674695  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:15.759222  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:16.147487  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:16.159970  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:16.160975  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:16.174207  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:16.258997  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:16.660164  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:16.660247  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:16.674461  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:16.759434  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:17.160167  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:17.160211  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:17.174364  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:17.259173  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:17.658940  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:17.660444  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:17.674638  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:17.759422  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:18.159603  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:18.160463  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:18.174991  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:18.258926  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:18.647877  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:18.660091  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:18.660270  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:18.674507  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:18.759470  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:19.160102  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:19.160359  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:19.174708  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:19.259350  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:19.659690  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:19.660560  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:19.673993  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:19.759643  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:20.159760  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:20.160739  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:20.174018  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:20.259759  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:20.659618  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:20.660617  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:20.673972  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:20.759708  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:21.147628  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:21.159869  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:21.161165  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:21.174520  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:21.259323  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:21.659211  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:21.660585  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:21.673760  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:21.759818  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:22.159736  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:22.160153  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:22.174301  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:22.259002  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:22.659694  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:22.661106  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:22.674760  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:22.759413  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:23.159284  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:23.160467  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:23.174960  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:23.259223  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:23.647843  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:23.659948  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:23.659983  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:23.674196  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:23.758885  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:24.159695  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:24.160775  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:24.174128  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:24.260104  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:24.660632  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:24.661828  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:24.674068  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:24.759900  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:25.159730  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:25.160014  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:25.174822  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:25.259570  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:25.659440  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:25.660392  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:25.674818  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:25.759718  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:26.147606  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:26.159628  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:26.161042  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:26.174701  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:26.259645  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:26.661426  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:26.662087  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:26.674503  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:26.759217  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:27.159812  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:27.160262  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:27.174635  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:27.259405  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:27.659575  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:27.660727  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:27.674227  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:27.759021  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:28.147837  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:28.160082  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:28.160114  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:28.174316  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:28.259173  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:28.646812  761388 node_ready.go:49] node "addons-685250" has status "Ready":"True"
	I0919 18:40:28.646840  761388 node_ready.go:38] duration metric: took 43.002724586s for node "addons-685250" to be "Ready" ...
	I0919 18:40:28.646862  761388 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
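
The node_ready.go lines above poll the node's Ready condition on a roughly 2.5s cadence (18:39:57.147, 18:39:59.647, 18:40:02.147, ...) until it flips to True at 18:40:28, about 43s in. A minimal sketch of the condition check being polled, assuming only the k8s.io/api/core/v1 types (the helper name is hypothetical, not minikube's actual node_ready.go code):

    package main

    import corev1 "k8s.io/api/core/v1"

    // nodeIsReady reports whether the node's NodeReady condition is True;
    // this is the check behind the node_ready.go `"Ready":"False"` /
    // `"Ready":"True"` lines above. Hypothetical helper for illustration.
    func nodeIsReady(n *corev1.Node) bool {
        for _, cond := range n.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }
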
	I0919 18:40:28.657370  761388 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xxkrh" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:28.665479  761388 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0919 18:40:28.665601  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:28.666301  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:28.673925  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:28.761809  761388 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0919 18:40:28.761844  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
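
Each kapi.go:96 line above is one iteration of a fixed-interval wait loop over a label selector; the four selectors (registry, ingress-nginx, gcp-auth, csi-hostpath-driver) are polled in parallel on a ~500ms cadence, and the kapi.go:86 "Found N Pods" lines mark the moment a selector first matches now that the node is Ready. A minimal client-go sketch of that pattern, with a hypothetical helper name and structure rather than minikube's actual kapi.go implementation:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodsRunning polls every 500ms (the cadence visible in the
    // timestamps above) until every pod matching selector in ns is Running,
    // or the timeout elapses. Illustrative sketch only.
    func waitForPodsRunning(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 {
                running := 0
                for _, p := range pods.Items {
                    if p.Status.Phase == corev1.PodRunning {
                        running++
                    }
                }
                if running == len(pods.Items) {
                    return nil // all matching pods are Running
                }
            }
            // Corresponds to one "waiting for pod ... current state: Pending" line.
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for pods %q in namespace %q", selector, ns)
    }
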
	I0919 18:40:29.160890  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:29.161414  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:29.174200  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:29.262793  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:29.666949  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:29.668214  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:29.673941  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:29.760517  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:30.160901  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:30.165455  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:30.238277  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:30.261435  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:30.665010  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:30.665243  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:30.740441  761388 pod_ready.go:93] pod "coredns-7c65d6cfc9-xxkrh" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:30.740475  761388 pod_ready.go:82] duration metric: took 2.083070651s for pod "coredns-7c65d6cfc9-xxkrh" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.740502  761388 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-685250" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.740774  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:30.749009  761388 pod_ready.go:93] pod "etcd-addons-685250" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:30.749034  761388 pod_ready.go:82] duration metric: took 8.524276ms for pod "etcd-addons-685250" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.749051  761388 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-685250" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.755475  761388 pod_ready.go:93] pod "kube-apiserver-addons-685250" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:30.755499  761388 pod_ready.go:82] duration metric: took 6.439358ms for pod "kube-apiserver-addons-685250" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.755513  761388 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-685250" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.837071  761388 pod_ready.go:93] pod "kube-controller-manager-addons-685250" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:30.837158  761388 pod_ready.go:82] duration metric: took 81.634686ms for pod "kube-controller-manager-addons-685250" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.837180  761388 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tt5h8" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.842181  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:30.843110  761388 pod_ready.go:93] pod "kube-proxy-tt5h8" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:30.843130  761388 pod_ready.go:82] duration metric: took 5.940025ms for pod "kube-proxy-tt5h8" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.843141  761388 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-685250" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:31.064216  761388 pod_ready.go:93] pod "kube-scheduler-addons-685250" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:31.064250  761388 pod_ready.go:82] duration metric: took 221.10192ms for pod "kube-scheduler-addons-685250" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:31.064264  761388 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace to be "Ready" ...
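
The pod_ready.go:93 and :103 lines report the PodReady condition per system-critical pod, emitting a duration metric once it turns True; metrics-server is then re-checked on a roughly 2s interval below until its condition flips. A sketch of that per-pod condition check (hypothetical helper name, assuming the core/v1 types; not minikube's actual pod_ready.go code):

    package main

    import corev1 "k8s.io/api/core/v1"

    // podIsReady mirrors the condition the pod_ready.go lines report: the
    // PodReady condition with status True. Hypothetical helper for illustration.
    func podIsReady(p *corev1.Pod) bool {
        for _, cond := range p.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }
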
	I0919 18:40:31.160309  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:31.161868  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:31.175154  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:31.261445  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:31.661945  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:31.662739  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:31.674262  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:31.764171  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:32.160964  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:32.161120  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:32.175453  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:32.261255  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:32.660913  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:32.661774  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:32.675133  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:32.760592  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:33.070854  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:33.161051  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:33.161301  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:33.175286  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:33.260865  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:33.660702  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:33.661852  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:33.675273  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:33.760668  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:34.160546  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:34.161086  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:34.174285  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:34.260753  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:34.661118  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:34.661516  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:34.675418  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:34.760922  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:35.071857  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:35.160454  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:35.160768  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:35.175281  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:35.260345  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:35.660487  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:35.661415  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:35.674901  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:35.760686  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:36.160095  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:36.161029  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:36.174515  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:36.260186  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:36.660284  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:36.661541  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:36.674751  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:36.760998  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:37.160677  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:37.160812  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:37.174659  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:37.260012  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:37.569850  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:37.660726  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:37.661114  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:37.674871  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:37.762472  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:38.160011  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:38.161167  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:38.236912  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:38.261156  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:38.660760  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:38.661073  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:38.675428  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:38.760681  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:39.160674  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:39.161278  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:39.174402  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:39.259952  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:39.570471  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:39.660746  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:39.661314  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:39.675826  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:39.760609  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:40.160453  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:40.161002  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:40.175034  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:40.261000  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:40.660533  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:40.661321  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:40.674507  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:40.760519  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:41.160473  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:41.161342  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:41.174400  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:41.259949  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:41.570843  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:41.660891  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:41.661331  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:41.675442  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:41.761658  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:42.159681  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:42.161135  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:42.175056  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:42.260520  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:42.660591  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:42.660622  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:42.675267  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:42.761379  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:43.160638  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:43.161031  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:43.241441  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:43.261128  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:43.641195  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:43.660811  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:43.660936  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:43.674877  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:43.761319  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:44.160296  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:44.161343  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:44.174926  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:44.260471  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:44.660490  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:44.661342  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:44.674851  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:44.760497  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:45.160507  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:45.160595  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:45.174852  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:45.260568  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:45.660293  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:45.660999  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:45.674670  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:45.761087  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:46.070190  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:46.160550  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:46.160867  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:46.174270  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:46.260149  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:46.660826  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:46.661696  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:46.676864  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:46.760955  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:47.160938  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:47.161615  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:47.175003  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:47.260783  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:47.660110  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:47.663272  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:47.701700  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:47.760283  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:48.159939  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:48.160947  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:48.174393  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:48.261025  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:48.570860  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:48.660740  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:48.661222  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:48.674639  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:48.761763  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:49.160005  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:49.160755  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:49.175182  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:49.260174  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:49.661013  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:49.661304  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:49.675895  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:49.777512  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:50.160946  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:50.160950  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:50.174204  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:50.259800  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:50.660357  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:50.661468  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:50.674771  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:50.760091  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:51.069537  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:51.160657  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:51.161375  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:51.174522  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:51.260449  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:51.660943  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:51.661436  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:51.679949  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:51.760555  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:52.160884  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:52.161969  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:52.175511  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:52.260422  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:52.660009  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:52.661427  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:52.674747  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:52.760455  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:53.069882  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:53.160723  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:53.160847  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:53.175048  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:53.260265  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:53.660742  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:53.660975  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:53.675736  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:53.760427  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:54.160454  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:54.160554  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:54.175527  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:54.261623  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:54.661044  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:54.661280  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:54.674256  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:54.762345  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:55.161624  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:55.161856  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:55.177557  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:55.260964  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:55.571599  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:55.660145  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:55.661293  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:55.674636  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:55.760666  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:56.160746  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:56.161295  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:56.174304  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:56.259893  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:56.660305  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:56.661330  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:56.674639  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:56.759937  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:57.161201  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:57.161367  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:57.174319  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:57.259921  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:57.660452  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:57.661521  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:57.675492  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:57.760449  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:58.071078  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:58.166319  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:58.167684  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:58.174484  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:58.261744  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:58.739476  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:58.740647  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:58.741278  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:58.843925  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:59.250851  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:59.348633  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:59.349162  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:59.352318  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:59.660355  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:59.662169  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:59.737125  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:59.761343  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:00.071258  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:00.161047  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:00.161410  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:00.175212  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:00.261071  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:00.661009  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:00.662071  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:00.674963  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:00.761260  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:01.160995  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:01.161522  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:01.174377  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:01.261177  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:01.660419  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:01.661825  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:01.675387  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:01.760448  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:02.071634  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:02.160982  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:02.161497  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:02.175139  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:02.262015  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:02.660625  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:02.661137  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:02.676415  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:02.760266  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:03.160315  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:03.161430  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:03.174874  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:03.260917  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:03.660127  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:03.661283  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:03.760962  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:03.761328  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:04.160941  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:04.161529  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:04.175159  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:04.260532  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:04.570304  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:04.660567  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:04.661503  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:04.675149  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:04.761527  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:05.160742  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:05.161438  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:05.175035  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:05.260884  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:05.660133  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:05.661095  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:05.674647  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:05.760505  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:06.160998  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:06.161237  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:06.175185  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:06.261772  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:06.570424  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:06.660209  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:06.661433  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:06.675129  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:06.761340  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:07.160439  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:07.161643  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:07.175553  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:07.260491  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:07.661227  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:07.661700  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:07.674758  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:07.769893  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:08.160882  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:08.161229  761388 kapi.go:107] duration metric: took 1m18.003722545s to wait for kubernetes.io/minikube-addons=registry ...
	I0919 18:41:08.174364  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:08.260993  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:08.570813  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:08.661066  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:08.675397  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:08.761869  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:09.163441  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:09.260343  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:09.261680  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:09.661162  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:09.738749  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:09.761895  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:10.161848  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:10.174642  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:10.261127  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:10.638793  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:10.660408  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:10.737983  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:10.761997  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:11.160636  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:11.238753  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:11.260239  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:11.661077  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:11.675809  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:11.760946  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:12.160226  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:12.174555  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:12.260120  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:12.660888  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:12.675281  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:12.759818  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:13.070755  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:13.159900  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:13.175280  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:13.260711  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:13.674228  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:13.675067  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:13.761264  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:14.160557  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:14.174803  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:14.260591  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:14.660641  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:14.675045  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:14.761376  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:15.070790  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:15.161017  761388 kapi.go:107] duration metric: took 1m25.005187502s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0919 18:41:15.174846  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:15.261085  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:15.675476  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:15.837474  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:16.268231  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:16.268764  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:16.676196  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:16.760827  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:17.176212  761388 kapi.go:107] duration metric: took 1m22.504803809s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0919 18:41:17.177857  761388 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-685250 cluster.
	I0919 18:41:17.179198  761388 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0919 18:41:17.180644  761388 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
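For reference, the gcp-auth opt-out mentioned above is just a pod label. A minimal, hypothetical Go sketch using client-go types (the pod name, image, and label value are illustrative assumptions; the message only specifies the `gcp-auth-skip-secret` key):

// Hypothetical pod that opts out of gcp-auth credential mounting.
// Only the label key comes from the message above; the value here
// is arbitrary.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func demoPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "demo",
			Namespace: "default",
			Labels:    map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "app", Image: "gcr.io/k8s-minikube/busybox"},
			},
		},
	}
}

func main() {
	fmt.Println(demoPod().Labels)
}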
	I0919 18:41:17.262198  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:17.570361  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:17.760518  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:18.261747  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:18.761118  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:19.260370  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:19.570826  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:19.761115  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:20.260708  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:20.761013  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:21.260276  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:21.571353  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:21.760456  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:22.260815  761388 kapi.go:107] duration metric: took 1m31.004968765s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0919 18:41:22.262816  761388 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, ingress-dns, helm-tiller, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0919 18:41:22.264198  761388 addons.go:510] duration metric: took 1m38.250564753s for enable addons: enabled=[nvidia-device-plugin storage-provisioner ingress-dns helm-tiller cloud-spanner metrics-server yakd storage-provisioner-rancher inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
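The kapi.go:96 lines above are minikube polling each addon's pods by label selector until they stop reporting Pending, and the kapi.go:107 lines record the total wait per selector. A minimal sketch of that pattern with client-go; the helper name waitForLabel, the 500ms interval (inferred from the log timestamps), and the use of wait.PollUntilContextTimeout are assumptions rather than minikube's exact implementation:

// Sketch of the label-selector wait loop behind the kapi.go:96 lines.
// Helper name and polling details are assumptions, not minikube's code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				// No pods yet; keep polling (the "Pending: [<nil>]" state above).
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForLabel(context.Background(), cs, "kube-system",
		"kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
		panic(err)
	}
}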
	I0919 18:41:24.069345  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:26.070338  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:28.571150  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:31.069639  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:33.069801  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:35.069951  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:37.070152  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:39.570142  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:41.570373  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:44.069797  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:46.070575  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:46.570352  761388 pod_ready.go:93] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"True"
	I0919 18:41:46.570378  761388 pod_ready.go:82] duration metric: took 1m15.506104425s for pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:46.570389  761388 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-lnffq" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:46.574639  761388 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-lnffq" in "kube-system" namespace has status "Ready":"True"
	I0919 18:41:46.574659  761388 pod_ready.go:82] duration metric: took 4.26409ms for pod "nvidia-device-plugin-daemonset-lnffq" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:46.574677  761388 pod_ready.go:39] duration metric: took 1m17.927800889s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
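The pod_ready.go lines flip from "Ready":"False" to "Ready":"True" once a pod's PodReady condition turns True. A short sketch of that check (the helper name isPodReady is an assumption):

// Sketch of the Ready-condition check behind the pod_ready.go lines:
// a pod reports "Ready":"True" once its PodReady condition is True.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// A pod whose Ready condition is still False, as in the log above.
	p := &corev1.Pod{Status: corev1.PodStatus{
		Conditions: []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionFalse}},
	}}
	fmt.Printf("pod has status Ready=%v\n", isPodReady(p))
}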
	I0919 18:41:46.574695  761388 api_server.go:52] waiting for apiserver process to appear ...
	I0919 18:41:46.574727  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 18:41:46.574775  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 18:41:46.610505  761388 cri.go:89] found id: "d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf"
	I0919 18:41:46.610525  761388 cri.go:89] found id: ""
	I0919 18:41:46.610532  761388 logs.go:276] 1 containers: [d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf]
	I0919 18:41:46.610585  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:46.614097  761388 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 18:41:46.614166  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 18:41:46.647964  761388 cri.go:89] found id: "daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf"
	I0919 18:41:46.647984  761388 cri.go:89] found id: ""
	I0919 18:41:46.647992  761388 logs.go:276] 1 containers: [daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf]
	I0919 18:41:46.648034  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:46.651737  761388 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 18:41:46.651827  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 18:41:46.685728  761388 cri.go:89] found id: "61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a"
	I0919 18:41:46.685751  761388 cri.go:89] found id: ""
	I0919 18:41:46.685761  761388 logs.go:276] 1 containers: [61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a]
	I0919 18:41:46.685842  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:46.689509  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 18:41:46.689602  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 18:41:46.723120  761388 cri.go:89] found id: "a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae"
	I0919 18:41:46.723148  761388 cri.go:89] found id: ""
	I0919 18:41:46.723159  761388 logs.go:276] 1 containers: [a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae]
	I0919 18:41:46.723206  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:46.726505  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 18:41:46.726561  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 18:41:46.764041  761388 cri.go:89] found id: "1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d"
	I0919 18:41:46.764067  761388 cri.go:89] found id: ""
	I0919 18:41:46.764076  761388 logs.go:276] 1 containers: [1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d]
	I0919 18:41:46.764139  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:46.767386  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 18:41:46.767456  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 18:41:46.801334  761388 cri.go:89] found id: "4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148"
	I0919 18:41:46.801362  761388 cri.go:89] found id: ""
	I0919 18:41:46.801373  761388 logs.go:276] 1 containers: [4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148]
	I0919 18:41:46.801437  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:46.804747  761388 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 18:41:46.804810  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 18:41:46.838269  761388 cri.go:89] found id: "28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea"
	I0919 18:41:46.838289  761388 cri.go:89] found id: ""
	I0919 18:41:46.838297  761388 logs.go:276] 1 containers: [28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea]
	I0919 18:41:46.838353  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:46.841583  761388 logs.go:123] Gathering logs for describe nodes ...
	I0919 18:41:46.841608  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 18:41:46.939796  761388 logs.go:123] Gathering logs for kube-proxy [1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d] ...
	I0919 18:41:46.939825  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d"
	I0919 18:41:46.973962  761388 logs.go:123] Gathering logs for kube-controller-manager [4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148] ...
	I0919 18:41:46.973996  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148"
	I0919 18:41:47.040527  761388 logs.go:123] Gathering logs for kindnet [28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea] ...
	I0919 18:41:47.040563  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea"
	I0919 18:41:47.079512  761388 logs.go:123] Gathering logs for CRI-O ...
	I0919 18:41:47.079548  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 18:41:47.156835  761388 logs.go:123] Gathering logs for kubelet ...
	I0919 18:41:47.156873  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 18:41:47.244389  761388 logs.go:123] Gathering logs for kube-apiserver [d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf] ...
	I0919 18:41:47.244425  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf"
	I0919 18:41:47.291698  761388 logs.go:123] Gathering logs for etcd [daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf] ...
	I0919 18:41:47.291734  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf"
	I0919 18:41:47.339857  761388 logs.go:123] Gathering logs for coredns [61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a] ...
	I0919 18:41:47.339892  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a"
	I0919 18:41:47.378377  761388 logs.go:123] Gathering logs for kube-scheduler [a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae] ...
	I0919 18:41:47.378414  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae"
	I0919 18:41:47.419595  761388 logs.go:123] Gathering logs for container status ...
	I0919 18:41:47.419631  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 18:41:47.461066  761388 logs.go:123] Gathering logs for dmesg ...
	I0919 18:41:47.461101  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
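The "Gathering logs for ..." phase shells out, via ssh_runner, to journalctl and crictl with the exact commands shown above. A local os/exec sketch of the same commands, for illustration only (it runs them directly rather than over SSH and assumes the tools are installed on the host):

// Local sketch of the log-gathering commands above. minikube runs
// these over SSH inside the node; here they run directly via os/exec.
package main

import (
	"fmt"
	"os/exec"
)

func gather(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		fmt.Printf("%s failed: %v\n", name, err)
	}
	fmt.Printf("== %s %v ==\n%s\n", name, args, out)
}

func main() {
	// Mirrors the commands in the log; requires root and the tools present.
	gather("sudo", "journalctl", "-u", "kubelet", "-n", "400")
	gather("sudo", "journalctl", "-u", "crio", "-n", "400")
	gather("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver")
}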
	I0919 18:41:49.991902  761388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 18:41:50.006246  761388 api_server.go:72] duration metric: took 2m5.992641544s to wait for apiserver process to appear ...
	I0919 18:41:50.006277  761388 api_server.go:88] waiting for apiserver healthz status ...
	I0919 18:41:50.006316  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 18:41:50.006369  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 18:41:50.040275  761388 cri.go:89] found id: "d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf"
	I0919 18:41:50.040319  761388 cri.go:89] found id: ""
	I0919 18:41:50.040329  761388 logs.go:276] 1 containers: [d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf]
	I0919 18:41:50.040373  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:50.043705  761388 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 18:41:50.043766  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 18:41:50.078798  761388 cri.go:89] found id: "daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf"
	I0919 18:41:50.078819  761388 cri.go:89] found id: ""
	I0919 18:41:50.078826  761388 logs.go:276] 1 containers: [daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf]
	I0919 18:41:50.078884  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:50.082274  761388 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 18:41:50.082341  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 18:41:50.116003  761388 cri.go:89] found id: "61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a"
	I0919 18:41:50.116024  761388 cri.go:89] found id: ""
	I0919 18:41:50.116032  761388 logs.go:276] 1 containers: [61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a]
	I0919 18:41:50.116082  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:50.119438  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 18:41:50.119496  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 18:41:50.153370  761388 cri.go:89] found id: "a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae"
	I0919 18:41:50.153390  761388 cri.go:89] found id: ""
	I0919 18:41:50.153398  761388 logs.go:276] 1 containers: [a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae]
	I0919 18:41:50.153451  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:50.156934  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 18:41:50.156999  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 18:41:50.191346  761388 cri.go:89] found id: "1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d"
	I0919 18:41:50.191372  761388 cri.go:89] found id: ""
	I0919 18:41:50.191381  761388 logs.go:276] 1 containers: [1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d]
	I0919 18:41:50.191442  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:50.195442  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 18:41:50.195523  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 18:41:50.230094  761388 cri.go:89] found id: "4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148"
	I0919 18:41:50.230116  761388 cri.go:89] found id: ""
	I0919 18:41:50.230126  761388 logs.go:276] 1 containers: [4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148]
	I0919 18:41:50.230173  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:50.233591  761388 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 18:41:50.233648  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 18:41:50.267946  761388 cri.go:89] found id: "28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea"
	I0919 18:41:50.267968  761388 cri.go:89] found id: ""
	I0919 18:41:50.267976  761388 logs.go:276] 1 containers: [28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea]
	I0919 18:41:50.268020  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:50.271492  761388 logs.go:123] Gathering logs for etcd [daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf] ...
	I0919 18:41:50.271521  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf"
	I0919 18:41:50.315171  761388 logs.go:123] Gathering logs for coredns [61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a] ...
	I0919 18:41:50.315204  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a"
	I0919 18:41:50.350242  761388 logs.go:123] Gathering logs for kube-controller-manager [4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148] ...
	I0919 18:41:50.350276  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148"
	I0919 18:41:50.406986  761388 logs.go:123] Gathering logs for kindnet [28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea] ...
	I0919 18:41:50.407024  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea"
	I0919 18:41:50.443914  761388 logs.go:123] Gathering logs for CRI-O ...
	I0919 18:41:50.443950  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 18:41:50.522117  761388 logs.go:123] Gathering logs for kubelet ...
	I0919 18:41:50.522161  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 18:41:50.603999  761388 logs.go:123] Gathering logs for dmesg ...
	I0919 18:41:50.604036  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 18:41:50.633867  761388 logs.go:123] Gathering logs for describe nodes ...
	I0919 18:41:50.633909  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 18:41:50.735662  761388 logs.go:123] Gathering logs for kube-apiserver [d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf] ...
	I0919 18:41:50.735694  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf"
	I0919 18:41:50.778766  761388 logs.go:123] Gathering logs for kube-scheduler [a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae] ...
	I0919 18:41:50.778800  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae"
	I0919 18:41:50.822323  761388 logs.go:123] Gathering logs for kube-proxy [1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d] ...
	I0919 18:41:50.822362  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d"
	I0919 18:41:50.858212  761388 logs.go:123] Gathering logs for container status ...
	I0919 18:41:50.858244  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 18:41:53.402426  761388 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 18:41:53.406334  761388 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 18:41:53.407293  761388 api_server.go:141] control plane version: v1.31.1
	I0919 18:41:53.407337  761388 api_server.go:131] duration metric: took 3.401052443s to wait for apiserver health ...
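The healthz probe above is a plain HTTPS GET against the apiserver that succeeds once it returns 200 with a body of "ok". A sketch with net/http; skipping TLS verification is a simplification here, since minikube authenticates with the cluster's client certificates:

// Sketch of the apiserver healthz probe above. InsecureSkipVerify is
// for illustration only; real clients present the cluster's certs.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// Expect "returned 200:" followed by "ok", as in the log above.
	fmt.Printf("returned %d:\n%s\n", resp.StatusCode, body)
}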
	I0919 18:41:53.407348  761388 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 18:41:53.407372  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 18:41:53.407424  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 18:41:53.442342  761388 cri.go:89] found id: "d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf"
	I0919 18:41:53.442368  761388 cri.go:89] found id: ""
	I0919 18:41:53.442378  761388 logs.go:276] 1 containers: [d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf]
	I0919 18:41:53.442443  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:53.445843  761388 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 18:41:53.445911  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 18:41:53.479392  761388 cri.go:89] found id: "daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf"
	I0919 18:41:53.479417  761388 cri.go:89] found id: ""
	I0919 18:41:53.479427  761388 logs.go:276] 1 containers: [daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf]
	I0919 18:41:53.479483  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:53.482761  761388 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 18:41:53.482821  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 18:41:53.517132  761388 cri.go:89] found id: "61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a"
	I0919 18:41:53.517157  761388 cri.go:89] found id: ""
	I0919 18:41:53.517169  761388 logs.go:276] 1 containers: [61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a]
	I0919 18:41:53.517224  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:53.520542  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 18:41:53.520602  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 18:41:53.554085  761388 cri.go:89] found id: "a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae"
	I0919 18:41:53.554107  761388 cri.go:89] found id: ""
	I0919 18:41:53.554116  761388 logs.go:276] 1 containers: [a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae]
	I0919 18:41:53.554174  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:53.557699  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 18:41:53.557779  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 18:41:53.591682  761388 cri.go:89] found id: "1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d"
	I0919 18:41:53.591703  761388 cri.go:89] found id: ""
	I0919 18:41:53.591711  761388 logs.go:276] 1 containers: [1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d]
	I0919 18:41:53.591755  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:53.595094  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 18:41:53.595172  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 18:41:53.630170  761388 cri.go:89] found id: "4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148"
	I0919 18:41:53.630192  761388 cri.go:89] found id: ""
	I0919 18:41:53.630199  761388 logs.go:276] 1 containers: [4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148]
	I0919 18:41:53.630257  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:53.633583  761388 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 18:41:53.633636  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 18:41:53.667431  761388 cri.go:89] found id: "28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea"
	I0919 18:41:53.667451  761388 cri.go:89] found id: ""
	I0919 18:41:53.667459  761388 logs.go:276] 1 containers: [28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea]
	I0919 18:41:53.667505  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:53.670883  761388 logs.go:123] Gathering logs for coredns [61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a] ...
	I0919 18:41:53.670906  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a"
	I0919 18:41:53.707961  761388 logs.go:123] Gathering logs for kube-scheduler [a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae] ...
	I0919 18:41:53.707993  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae"
	I0919 18:41:53.749962  761388 logs.go:123] Gathering logs for kube-controller-manager [4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148] ...
	I0919 18:41:53.749997  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148"
	I0919 18:41:53.808507  761388 logs.go:123] Gathering logs for kindnet [28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea] ...
	I0919 18:41:53.808548  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea"
	I0919 18:41:53.843831  761388 logs.go:123] Gathering logs for container status ...
	I0919 18:41:53.843860  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 18:41:53.886934  761388 logs.go:123] Gathering logs for kubelet ...
	I0919 18:41:53.886962  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 18:41:53.965269  761388 logs.go:123] Gathering logs for dmesg ...
	I0919 18:41:53.965305  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 18:41:54.000130  761388 logs.go:123] Gathering logs for describe nodes ...
	I0919 18:41:54.000165  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 18:41:54.102256  761388 logs.go:123] Gathering logs for CRI-O ...
	I0919 18:41:54.102283  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 18:41:54.180041  761388 logs.go:123] Gathering logs for kube-apiserver [d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf] ...
	I0919 18:41:54.180082  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf"
	I0919 18:41:54.225323  761388 logs.go:123] Gathering logs for etcd [daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf] ...
	I0919 18:41:54.225355  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf"
	I0919 18:41:54.270873  761388 logs.go:123] Gathering logs for kube-proxy [1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d] ...
	I0919 18:41:54.270914  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d"
	I0919 18:41:56.816722  761388 system_pods.go:59] 19 kube-system pods found
	I0919 18:41:56.816754  761388 system_pods.go:61] "coredns-7c65d6cfc9-xxkrh" [a7aaff41-f43e-4f04-b483-640f84c09e46] Running
	I0919 18:41:56.816759  761388 system_pods.go:61] "csi-hostpath-attacher-0" [baa243bf-40a7-484e-8c01-0899f41d8354] Running
	I0919 18:41:56.816763  761388 system_pods.go:61] "csi-hostpath-resizer-0" [3c4594f5-9d7b-4793-a0c8-7c6105b7d474] Running
	I0919 18:41:56.816767  761388 system_pods.go:61] "csi-hostpathplugin-wvvls" [354c11da-ee7f-4cda-9e0d-9814a4c5ece1] Running
	I0919 18:41:56.816770  761388 system_pods.go:61] "etcd-addons-685250" [cdb92c06-962c-4149-b7f6-bb5fe8331afd] Running
	I0919 18:41:56.816773  761388 system_pods.go:61] "kindnet-nr24c" [8747e20c-57fd-4ffe-9f87-ddda89de3e7b] Running
	I0919 18:41:56.816777  761388 system_pods.go:61] "kube-apiserver-addons-685250" [593c1822-def4-4967-babb-da46832c2f3b] Running
	I0919 18:41:56.816780  761388 system_pods.go:61] "kube-controller-manager-addons-685250" [241a64c3-08de-424a-8a6f-aaad07ae351f] Running
	I0919 18:41:56.816783  761388 system_pods.go:61] "kube-ingress-dns-minikube" [4d2c1d92-69aa-4dcd-be37-639b9fd4ab3d] Running
	I0919 18:41:56.816787  761388 system_pods.go:61] "kube-proxy-tt5h8" [693e7420-8268-43db-82ab-191606a57636] Running
	I0919 18:41:56.816791  761388 system_pods.go:61] "kube-scheduler-addons-685250" [57e53de0-08d3-4b04-822c-361178eb9bdf] Running
	I0919 18:41:56.816796  761388 system_pods.go:61] "metrics-server-84c5f94fbc-gpv2k" [0041dcd9-b46b-406b-a78c-728fda2b92cc] Running
	I0919 18:41:56.816800  761388 system_pods.go:61] "nvidia-device-plugin-daemonset-lnffq" [b2573f29-e8a6-4fc7-9a19-a01fb32e67f2] Running
	I0919 18:41:56.816805  761388 system_pods.go:61] "registry-66c9cd494c-tsz4w" [bdd1e643-0c83-4fed-a147-0dd79f789e29] Running
	I0919 18:41:56.816814  761388 system_pods.go:61] "registry-proxy-rgdgh" [fc0b3544-d729-4e33-a260-ef1ab277d08f] Running
	I0919 18:41:56.816821  761388 system_pods.go:61] "snapshot-controller-56fcc65765-hpwtx" [119e2c3a-894e-4b8d-b275-06125bb32c87] Running
	I0919 18:41:56.816825  761388 system_pods.go:61] "snapshot-controller-56fcc65765-qsngh" [8eba870c-9765-4259-b19c-945987c52d6e] Running
	I0919 18:41:56.816831  761388 system_pods.go:61] "storage-provisioner" [ddbf1396-7100-4a51-a1b7-b6896cabc0f4] Running
	I0919 18:41:56.816836  761388 system_pods.go:61] "tiller-deploy-b48cc5f79-64k5s" [bedc3304-f3bb-4c40-bb2c-bec621a3645c] Running
	I0919 18:41:56.816844  761388 system_pods.go:74] duration metric: took 3.409487976s to wait for pod list to return data ...
	I0919 18:41:56.816856  761388 default_sa.go:34] waiting for default service account to be created ...
	I0919 18:41:56.819044  761388 default_sa.go:45] found service account: "default"
	I0919 18:41:56.819064  761388 default_sa.go:55] duration metric: took 2.201823ms for default service account to be created ...
	I0919 18:41:56.819072  761388 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 18:41:56.827195  761388 system_pods.go:86] 19 kube-system pods found
	I0919 18:41:56.827219  761388 system_pods.go:89] "coredns-7c65d6cfc9-xxkrh" [a7aaff41-f43e-4f04-b483-640f84c09e46] Running
	I0919 18:41:56.827224  761388 system_pods.go:89] "csi-hostpath-attacher-0" [baa243bf-40a7-484e-8c01-0899f41d8354] Running
	I0919 18:41:56.827229  761388 system_pods.go:89] "csi-hostpath-resizer-0" [3c4594f5-9d7b-4793-a0c8-7c6105b7d474] Running
	I0919 18:41:56.827232  761388 system_pods.go:89] "csi-hostpathplugin-wvvls" [354c11da-ee7f-4cda-9e0d-9814a4c5ece1] Running
	I0919 18:41:56.827236  761388 system_pods.go:89] "etcd-addons-685250" [cdb92c06-962c-4149-b7f6-bb5fe8331afd] Running
	I0919 18:41:56.827239  761388 system_pods.go:89] "kindnet-nr24c" [8747e20c-57fd-4ffe-9f87-ddda89de3e7b] Running
	I0919 18:41:56.827243  761388 system_pods.go:89] "kube-apiserver-addons-685250" [593c1822-def4-4967-babb-da46832c2f3b] Running
	I0919 18:41:56.827246  761388 system_pods.go:89] "kube-controller-manager-addons-685250" [241a64c3-08de-424a-8a6f-aaad07ae351f] Running
	I0919 18:41:56.827250  761388 system_pods.go:89] "kube-ingress-dns-minikube" [4d2c1d92-69aa-4dcd-be37-639b9fd4ab3d] Running
	I0919 18:41:56.827254  761388 system_pods.go:89] "kube-proxy-tt5h8" [693e7420-8268-43db-82ab-191606a57636] Running
	I0919 18:41:56.827258  761388 system_pods.go:89] "kube-scheduler-addons-685250" [57e53de0-08d3-4b04-822c-361178eb9bdf] Running
	I0919 18:41:56.827261  761388 system_pods.go:89] "metrics-server-84c5f94fbc-gpv2k" [0041dcd9-b46b-406b-a78c-728fda2b92cc] Running
	I0919 18:41:56.827264  761388 system_pods.go:89] "nvidia-device-plugin-daemonset-lnffq" [b2573f29-e8a6-4fc7-9a19-a01fb32e67f2] Running
	I0919 18:41:56.827267  761388 system_pods.go:89] "registry-66c9cd494c-tsz4w" [bdd1e643-0c83-4fed-a147-0dd79f789e29] Running
	I0919 18:41:56.827270  761388 system_pods.go:89] "registry-proxy-rgdgh" [fc0b3544-d729-4e33-a260-ef1ab277d08f] Running
	I0919 18:41:56.827273  761388 system_pods.go:89] "snapshot-controller-56fcc65765-hpwtx" [119e2c3a-894e-4b8d-b275-06125bb32c87] Running
	I0919 18:41:56.827276  761388 system_pods.go:89] "snapshot-controller-56fcc65765-qsngh" [8eba870c-9765-4259-b19c-945987c52d6e] Running
	I0919 18:41:56.827279  761388 system_pods.go:89] "storage-provisioner" [ddbf1396-7100-4a51-a1b7-b6896cabc0f4] Running
	I0919 18:41:56.827282  761388 system_pods.go:89] "tiller-deploy-b48cc5f79-64k5s" [bedc3304-f3bb-4c40-bb2c-bec621a3645c] Running
	I0919 18:41:56.827287  761388 system_pods.go:126] duration metric: took 8.210478ms to wait for k8s-apps to be running ...
	I0919 18:41:56.827294  761388 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 18:41:56.827364  761388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 18:41:56.838722  761388 system_svc.go:56] duration metric: took 11.419899ms WaitForService to wait for kubelet
	I0919 18:41:56.838749  761388 kubeadm.go:582] duration metric: took 2m12.825152378s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 18:41:56.838775  761388 node_conditions.go:102] verifying NodePressure condition ...
	I0919 18:41:56.841799  761388 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 18:41:56.841823  761388 node_conditions.go:123] node cpu capacity is 8
	I0919 18:41:56.841837  761388 node_conditions.go:105] duration metric: took 3.056374ms to run NodePressure ...
	I0919 18:41:56.841850  761388 start.go:241] waiting for startup goroutines ...
	I0919 18:41:56.841857  761388 start.go:246] waiting for cluster config update ...
	I0919 18:41:56.841872  761388 start.go:255] writing updated cluster config ...
	I0919 18:41:56.842127  761388 ssh_runner.go:195] Run: rm -f paused
	I0919 18:41:56.891468  761388 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0919 18:41:56.894630  761388 out.go:177] * Done! kubectl is now configured to use "addons-685250" cluster and "default" namespace by default
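
The startup verification above can be replayed by hand. A minimal sketch, assuming shell access to the node (for example via "minikube ssh -p addons-685250"); the healthz endpoint and the crictl filters are the same ones minikube uses in this log:

	# apiserver health probe (self-signed cert, hence -k); expect "ok" as above
	curl -k https://192.168.49.2:8443/healthz
	# list control-plane containers the way the log-gathering step does
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo crictl ps -a --quiet --name=etcd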
	
	
	==> CRI-O <==
	Sep 19 18:51:10 addons-685250 crio[1028]: time="2024-09-19 18:51:10.640497551Z" level=info msg="Stopping container: 6072bdcdef66c425900e1e727a39acb7cb829d0058b81a0d9980c63e89cc3732 (timeout: 30s)" id=42ae5ec0-6333-403d-978c-538cde2afdb5 name=/runtime.v1.RuntimeService/StopContainer
	Sep 19 18:51:10 addons-685250 crio[1028]: time="2024-09-19 18:51:10.654120737Z" level=info msg="Stopped container 3a91e259f62be849d7537c3da094910a11274c2bce5bb7fd891cfa74f37133a5: headlamp/headlamp-7b5c95b59d-ttv2g/headlamp" id=98d28edf-adeb-4713-83d4-f808920ea0c7 name=/runtime.v1.RuntimeService/StopContainer
	Sep 19 18:51:10 addons-685250 crio[1028]: time="2024-09-19 18:51:10.654767165Z" level=info msg="Stopping pod sandbox: 185ae185e0b04b10dacbe55c49f22687b02412f7d2ea5d79cb25ad926ac48061" id=01289f8a-a7f4-4604-8c7b-377116951d17 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 19 18:51:10 addons-685250 crio[1028]: time="2024-09-19 18:51:10.655021463Z" level=info msg="Got pod network &{Name:headlamp-7b5c95b59d-ttv2g Namespace:headlamp ID:185ae185e0b04b10dacbe55c49f22687b02412f7d2ea5d79cb25ad926ac48061 UID:c9e75ebd-ac0a-4be7-a148-aeeb2d8dfb92 NetNS:/var/run/netns/33da4de3-2bda-4307-b987-c96b2b916755 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 19 18:51:10 addons-685250 crio[1028]: time="2024-09-19 18:51:10.655147103Z" level=info msg="Deleting pod headlamp_headlamp-7b5c95b59d-ttv2g from CNI network \"kindnet\" (type=ptp)"
	Sep 19 18:51:10 addons-685250 crio[1028]: time="2024-09-19 18:51:10.688908257Z" level=info msg="Stopped pod sandbox: 185ae185e0b04b10dacbe55c49f22687b02412f7d2ea5d79cb25ad926ac48061" id=01289f8a-a7f4-4604-8c7b-377116951d17 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 19 18:51:10 addons-685250 crio[1028]: time="2024-09-19 18:51:10.769311697Z" level=info msg="Stopped container 31d0bd8822f7dcea585559863e6c965b1d482c50e606f527585e85c2a6c96fa6: kube-system/registry-66c9cd494c-tsz4w/registry" id=b87bbee4-f0f7-4b98-9b85-80868e18c72d name=/runtime.v1.RuntimeService/StopContainer
	Sep 19 18:51:10 addons-685250 crio[1028]: time="2024-09-19 18:51:10.769906598Z" level=info msg="Stopping pod sandbox: 06f604eddca6b2a35d224f4259bbf7b213186cae14a606cf1a44c7a777863715" id=439f5ea0-e6db-4c0d-9b8d-505583df7d7d name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 19 18:51:10 addons-685250 crio[1028]: time="2024-09-19 18:51:10.770184577Z" level=info msg="Got pod network &{Name:registry-66c9cd494c-tsz4w Namespace:kube-system ID:06f604eddca6b2a35d224f4259bbf7b213186cae14a606cf1a44c7a777863715 UID:bdd1e643-0c83-4fed-a147-0dd79f789e29 NetNS:/var/run/netns/fa712290-ee3e-4d78-a27b-143f6f00a73b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 19 18:51:10 addons-685250 crio[1028]: time="2024-09-19 18:51:10.770354406Z" level=info msg="Deleting pod kube-system_registry-66c9cd494c-tsz4w from CNI network \"kindnet\" (type=ptp)"
	Sep 19 18:51:10 addons-685250 crio[1028]: time="2024-09-19 18:51:10.782487621Z" level=info msg="Stopped container 6072bdcdef66c425900e1e727a39acb7cb829d0058b81a0d9980c63e89cc3732: kube-system/registry-proxy-rgdgh/registry-proxy" id=42ae5ec0-6333-403d-978c-538cde2afdb5 name=/runtime.v1.RuntimeService/StopContainer
	Sep 19 18:51:10 addons-685250 crio[1028]: time="2024-09-19 18:51:10.783075362Z" level=info msg="Stopping pod sandbox: 599f82da674f3a0687e5dedd13f1551acabcf099fe349a132e776449c46b809f" id=941e057d-4e43-4a24-ab6d-28845521d522 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 19 18:51:10 addons-685250 crio[1028]: time="2024-09-19 18:51:10.787080712Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-74CS74EVD5XD7KAW - [0:0]\n:KUBE-HP-EUPJVSCHHF6EZRSX - [0:0]\n:KUBE-HP-GU272LSRVAS4HDIZ - [0:0]\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-jwqfz_ingress-nginx_64bc7843-1fb9-4837-a049-d1bcd2a8b19a_0_ hostport 443\" -m tcp --dport 443 -j KUBE-HP-EUPJVSCHHF6EZRSX\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-jwqfz_ingress-nginx_64bc7843-1fb9-4837-a049-d1bcd2a8b19a_0_ hostport 80\" -m tcp --dport 80 -j KUBE-HP-GU272LSRVAS4HDIZ\n-A KUBE-HP-EUPJVSCHHF6EZRSX -s 10.244.0.20/32 -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-jwqfz_ingress-nginx_64bc7843-1fb9-4837-a049-d1bcd2a8b19a_0_ hostport 443\" -j KUBE-MARK-MASQ\n-A KUBE-HP-EUPJVSCHHF6EZRSX -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-jwqfz_ingress-nginx_64bc7843-1fb9-4837-a
049-d1bcd2a8b19a_0_ hostport 443\" -m tcp -j DNAT --to-destination 10.244.0.20:443\n-A KUBE-HP-GU272LSRVAS4HDIZ -s 10.244.0.20/32 -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-jwqfz_ingress-nginx_64bc7843-1fb9-4837-a049-d1bcd2a8b19a_0_ hostport 80\" -j KUBE-MARK-MASQ\n-A KUBE-HP-GU272LSRVAS4HDIZ -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-jwqfz_ingress-nginx_64bc7843-1fb9-4837-a049-d1bcd2a8b19a_0_ hostport 80\" -m tcp -j DNAT --to-destination 10.244.0.20:80\n-X KUBE-HP-74CS74EVD5XD7KAW\nCOMMIT\n"
	Sep 19 18:51:10 addons-685250 crio[1028]: time="2024-09-19 18:51:10.789824644Z" level=info msg="Closing host port tcp:5000"
	Sep 19 18:51:10 addons-685250 crio[1028]: time="2024-09-19 18:51:10.791547995Z" level=info msg="Host port tcp:5000 does not have an open socket"
	Sep 19 18:51:10 addons-685250 crio[1028]: time="2024-09-19 18:51:10.791729920Z" level=info msg="Got pod network &{Name:registry-proxy-rgdgh Namespace:kube-system ID:599f82da674f3a0687e5dedd13f1551acabcf099fe349a132e776449c46b809f UID:fc0b3544-d729-4e33-a260-ef1ab277d08f NetNS:/var/run/netns/7c7f54f5-3db0-469a-b74d-817e9429003b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 19 18:51:10 addons-685250 crio[1028]: time="2024-09-19 18:51:10.791855859Z" level=info msg="Deleting pod kube-system_registry-proxy-rgdgh from CNI network \"kindnet\" (type=ptp)"
	Sep 19 18:51:10 addons-685250 crio[1028]: time="2024-09-19 18:51:10.808662488Z" level=info msg="Stopped pod sandbox: 06f604eddca6b2a35d224f4259bbf7b213186cae14a606cf1a44c7a777863715" id=439f5ea0-e6db-4c0d-9b8d-505583df7d7d name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 19 18:51:10 addons-685250 crio[1028]: time="2024-09-19 18:51:10.836938635Z" level=info msg="Stopped pod sandbox: 599f82da674f3a0687e5dedd13f1551acabcf099fe349a132e776449c46b809f" id=941e057d-4e43-4a24-ab6d-28845521d522 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 19 18:51:11 addons-685250 crio[1028]: time="2024-09-19 18:51:11.321746594Z" level=info msg="Removing container: 6072bdcdef66c425900e1e727a39acb7cb829d0058b81a0d9980c63e89cc3732" id=252aff04-9920-431e-bc22-829c7cd9cfa8 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 19 18:51:11 addons-685250 crio[1028]: time="2024-09-19 18:51:11.336985677Z" level=info msg="Removed container 6072bdcdef66c425900e1e727a39acb7cb829d0058b81a0d9980c63e89cc3732: kube-system/registry-proxy-rgdgh/registry-proxy" id=252aff04-9920-431e-bc22-829c7cd9cfa8 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 19 18:51:11 addons-685250 crio[1028]: time="2024-09-19 18:51:11.339129282Z" level=info msg="Removing container: 31d0bd8822f7dcea585559863e6c965b1d482c50e606f527585e85c2a6c96fa6" id=5878fe5c-32f4-4d0b-8a7c-7fc2127d4569 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 19 18:51:11 addons-685250 crio[1028]: time="2024-09-19 18:51:11.356331398Z" level=info msg="Removed container 31d0bd8822f7dcea585559863e6c965b1d482c50e606f527585e85c2a6c96fa6: kube-system/registry-66c9cd494c-tsz4w/registry" id=5878fe5c-32f4-4d0b-8a7c-7fc2127d4569 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 19 18:51:11 addons-685250 crio[1028]: time="2024-09-19 18:51:11.358097689Z" level=info msg="Removing container: 3a91e259f62be849d7537c3da094910a11274c2bce5bb7fd891cfa74f37133a5" id=106159ca-389b-48c9-9a43-2c7e6ae20c1c name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 19 18:51:11 addons-685250 crio[1028]: time="2024-09-19 18:51:11.373167968Z" level=info msg="Removed container 3a91e259f62be849d7537c3da094910a11274c2bce5bb7fd891cfa74f37133a5: headlamp/headlamp-7b5c95b59d-ttv2g/headlamp" id=106159ca-389b-48c9-9a43-2c7e6ae20c1c name=/runtime.v1.RuntimeService/RemoveContainer
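
The CRI-O entries above come from the systemd journal; they can be re-read or followed on the node with the same unit the gathering step queries ("journalctl -u crio -n 400" earlier in this log):

	sudo journalctl -u crio -n 400    # last 400 entries, as collected here
	sudo journalctl -u crio -f        # follow live while reproducing a failure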
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	a80bb1fa74994       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            5 minutes ago       Exited              gadget                                   6                   77bf5c5c44289       gadget-5nngx
	9631f3dbcf504       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          9 minutes ago       Running             csi-snapshotter                          0                   9d633bbb3f6dc       csi-hostpathplugin-wvvls
	96030830b51d1       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          9 minutes ago       Running             csi-provisioner                          0                   9d633bbb3f6dc       csi-hostpathplugin-wvvls
	32bc4d23668fc       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            9 minutes ago       Running             liveness-probe                           0                   9d633bbb3f6dc       csi-hostpathplugin-wvvls
	0cc2312cf82a4       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           9 minutes ago       Running             hostpath                                 0                   9d633bbb3f6dc       csi-hostpathplugin-wvvls
	8763c1c636d0e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                                 9 minutes ago       Running             gcp-auth                                 0                   c4905e6f06668       gcp-auth-89d5ffd79-5xmj7
	6ec44220259bc       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6                             9 minutes ago       Running             controller                               0                   7eeed172b87cd       ingress-nginx-controller-bc57996ff-jwqfz
	533fe244bc19f       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                10 minutes ago      Running             node-driver-registrar                    0                   9d633bbb3f6dc       csi-hostpathplugin-wvvls
	c342e4862e372       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     10 minutes ago      Running             nvidia-device-plugin-ctr                 0                   c62bdd278bf41       nvidia-device-plugin-daemonset-lnffq
	781e8a586344e       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              10 minutes ago      Running             csi-resizer                              0                   79d20db0c7bd8       csi-hostpath-resizer-0
	135118d48b8e5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   10 minutes ago      Exited              patch                                    0                   b5047ec8d653b       ingress-nginx-admission-patch-zkk9z
	6148ff93b7e21       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      10 minutes ago      Running             volume-snapshot-controller               0                   2c111431a9537       snapshot-controller-56fcc65765-hpwtx
	e9b1047d4987f       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              10 minutes ago      Running             yakd                                     0                   edebb43bb417f       yakd-dashboard-67d98fc6b-d475t
	776cccb0a5bb1       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   10 minutes ago      Running             csi-external-health-monitor-controller   0                   9d633bbb3f6dc       csi-hostpathplugin-wvvls
	ae42c7830ff31       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      10 minutes ago      Running             volume-snapshot-controller               0                   a67d1128cd369       snapshot-controller-56fcc65765-qsngh
	3bae675b3b545       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   10 minutes ago      Exited              create                                   0                   00fa51ee04653       ingress-nginx-admission-create-rqqsb
	3def0c19497bb       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a                        10 minutes ago      Running             metrics-server                           0                   4dc38a01fe945       metrics-server-84c5f94fbc-gpv2k
	8ebd415ab1711       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  10 minutes ago      Running             tiller                                   0                   ad46a3adef276       tiller-deploy-b48cc5f79-64k5s
	cd361280e82f5       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             10 minutes ago      Running             csi-attacher                             0                   995144454e795       csi-hostpath-attacher-0
	71455e9d9d7f9       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             10 minutes ago      Running             minikube-ingress-dns                     0                   1b3ebc5c0bddd       kube-ingress-dns-minikube
	c265d33c64155       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             10 minutes ago      Running             storage-provisioner                      0                   f0b8765d93237       storage-provisioner
	61dc325585534       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             10 minutes ago      Running             coredns                                  0                   70191f5a80edd       coredns-7c65d6cfc9-xxkrh
	28c707c30998a       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                                             11 minutes ago      Running             kindnet-cni                              0                   d0d4a24bd5f33       kindnet-nr24c
	1577029617c13       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                                             11 minutes ago      Running             kube-proxy                               0                   006fe668e3bca       kube-proxy-tt5h8
	a9c5d6500618f       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                                             11 minutes ago      Running             kube-scheduler                           0                   6a497d68d67db       kube-scheduler-addons-685250
	4b38bddc95b37       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                                             11 minutes ago      Running             kube-controller-manager                  0                   8dc935b2a1118       kube-controller-manager-addons-685250
	daa04e6dadb8c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                                             11 minutes ago      Running             etcd                                     0                   49d2cd4b861cb       etcd-addons-685250
	d48e736f52b35       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                                             11 minutes ago      Running             kube-apiserver                           0                   ee84a44e45fe4       kube-apiserver-addons-685250
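
The container status table above is plain "crictl ps -a" output. Logs for any row can be pulled by container ID, mirroring the "crictl logs --tail 400 <id>" calls in the minikube log; like docker, crictl resolves a unique ID prefix, so the truncated IDs in the first column are normally enough:

	sudo crictl ps -a
	sudo crictl logs --tail 400 61dc325585534    # e.g. the coredns container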
	
	
	==> coredns [61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a] <==
	[INFO] 10.244.0.18:34436 - 35698 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000108309s
	[INFO] 10.244.0.18:53834 - 64751 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000039533s
	[INFO] 10.244.0.18:53834 - 26861 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000063287s
	[INFO] 10.244.0.18:40724 - 19030 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005948549s
	[INFO] 10.244.0.18:40724 - 2384 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.00624164s
	[INFO] 10.244.0.18:55178 - 49717 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004779846s
	[INFO] 10.244.0.18:55178 - 43576 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.008989283s
	[INFO] 10.244.0.18:35236 - 29185 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005503532s
	[INFO] 10.244.0.18:35236 - 29053 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006569969s
	[INFO] 10.244.0.18:58901 - 23064 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00007067s
	[INFO] 10.244.0.18:58901 - 45339 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000090322s
	[INFO] 10.244.0.21:52948 - 4177 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000227224s
	[INFO] 10.244.0.21:45787 - 22571 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000317788s
	[INFO] 10.244.0.21:59704 - 52899 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000152904s
	[INFO] 10.244.0.21:50018 - 4022 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000239218s
	[INFO] 10.244.0.21:53553 - 39101 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000141888s
	[INFO] 10.244.0.21:37741 - 20732 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000217668s
	[INFO] 10.244.0.21:55394 - 50618 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.005906983s
	[INFO] 10.244.0.21:37603 - 64460 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.00595091s
	[INFO] 10.244.0.21:43538 - 27403 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006051611s
	[INFO] 10.244.0.21:54216 - 9854 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.00637344s
	[INFO] 10.244.0.21:36139 - 65099 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007481578s
	[INFO] 10.244.0.21:49105 - 14009 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.010893085s
	[INFO] 10.244.0.21:52556 - 17077 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000849386s
	[INFO] 10.244.0.21:56780 - 3812 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000933647s
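
The NXDOMAIN/NOERROR pairs above are ordinary ndots search-path expansion: the lookup for registry.kube-system.svc.cluster.local is tried against each search domain (svc.cluster.local, cluster.local, the GCE-internal domains) before the bare name answers NOERROR with an A record, so in-cluster DNS for the registry service was working at the time of the failure. A quick check from a throwaway pod (the pod name dns-check is arbitrary; any image with nslookup works):

	kubectl --context addons-685250 run --rm -it dns-check --image=busybox --restart=Never -- \
	  nslookup registry.kube-system.svc.cluster.local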
	
	
	==> describe nodes <==
	Name:               addons-685250
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-685250
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=addons-685250
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_19T18_39_40_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-685250
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-685250"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 18:39:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-685250
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 18:51:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 18:50:11 +0000   Thu, 19 Sep 2024 18:39:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 18:50:11 +0000   Thu, 19 Sep 2024 18:39:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 18:50:11 +0000   Thu, 19 Sep 2024 18:39:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 18:50:11 +0000   Thu, 19 Sep 2024 18:40:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-685250
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 59964951ae744ca891a1d33d48395cb6
	  System UUID:                ca4c5e3c-dd72-4ffd-b420-cdf7d87c497b
	  Boot ID:                    e13586fb-8251-4108-a9ef-ca5be7772d16
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (23 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  gadget                      gadget-5nngx                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gcp-auth                    gcp-auth-89d5ffd79-5xmj7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-jwqfz    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-xxkrh                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpathplugin-wvvls                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-addons-685250                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-nr24c                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-addons-685250                250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-addons-685250       200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-tt5h8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-addons-685250                100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-84c5f94fbc-gpv2k             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         11m
	  kube-system                 nvidia-device-plugin-daemonset-lnffq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 snapshot-controller-56fcc65765-hpwtx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-qsngh        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 tiller-deploy-b48cc5f79-64k5s               0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-d475t              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node addons-685250 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node addons-685250 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node addons-685250 status is now: NodeHasSufficientPID
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  11m                kubelet          Node addons-685250 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m                kubelet          Node addons-685250 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m                kubelet          Node addons-685250 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node addons-685250 event: Registered Node addons-685250 in Controller
	  Normal   NodeReady                10m                kubelet          Node addons-685250 status is now: NodeReady
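
This node description was captured on the node with the kubeconfig minikube installs there (the "describe nodes" gathering step earlier in this log). The same view is available either way:

	# from the host
	kubectl --context addons-685250 describe node addons-685250
	# on the node, exactly as the log-gathering step ran it
	sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig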
	
	
	==> dmesg <==
	[  +0.000002] ll header: 00000000: 02 42 9c 9b da 37 02 42 c0 a8 55 02 08 00
	[ +49.810034] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000005] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.000002] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000002] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +1.030260] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000006] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.003972] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000005] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +2.011865] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000005] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.003970] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000004] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +4.219718] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000009] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000005] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[Sep19 18:17] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000009] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.000035] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000006] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
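
The repeated "martian source" lines record packets from 10.96.0.1, normally the kubernetes service ClusterIP, arriving on a Docker bridge interface where the kernel does not expect that source; with the Docker driver this is routine bridge noise rather than a test failure. The filtered view above matches the dmesg invocation in the gathering step:

	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400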
	
	
	==> etcd [daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf] <==
	{"level":"info","ts":"2024-09-19T18:39:45.750437Z","caller":"traceutil/trace.go:171","msg":"trace[636247722] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"105.789663ms","start":"2024-09-19T18:39:45.644636Z","end":"2024-09-19T18:39:45.750426Z","steps":["trace[636247722] 'process raft request'  (duration: 105.172319ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:39:45.750586Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.35354ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-685250\" ","response":"range_response_count:1 size:5655"}
	{"level":"info","ts":"2024-09-19T18:39:45.750614Z","caller":"traceutil/trace.go:171","msg":"trace[2105579023] range","detail":"{range_begin:/registry/minions/addons-685250; range_end:; response_count:1; response_revision:393; }","duration":"101.39549ms","start":"2024-09-19T18:39:45.649211Z","end":"2024-09-19T18:39:45.750606Z","steps":["trace[2105579023] 'agreement among raft nodes before linearized reading'  (duration: 101.326831ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:39:45.855653Z","caller":"traceutil/trace.go:171","msg":"trace[11607049] linearizableReadLoop","detail":"{readStateIndex:406; appliedIndex:404; }","duration":"105.61545ms","start":"2024-09-19T18:39:45.750016Z","end":"2024-09-19T18:39:45.855632Z","steps":["trace[11607049] 'read index received'  (duration: 86.226896ms)","trace[11607049] 'applied index is now lower than readState.Index'  (duration: 19.387979ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-19T18:39:45.855963Z","caller":"traceutil/trace.go:171","msg":"trace[722294032] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"106.750007ms","start":"2024-09-19T18:39:45.749192Z","end":"2024-09-19T18:39:45.855942Z","steps":["trace[722294032] 'process raft request'  (duration: 100.852428ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:39:45.856192Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.988653ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4034"}
	{"level":"info","ts":"2024-09-19T18:39:45.856224Z","caller":"traceutil/trace.go:171","msg":"trace[83912261] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:395; }","duration":"202.035355ms","start":"2024-09-19T18:39:45.654180Z","end":"2024-09-19T18:39:45.856215Z","steps":["trace[83912261] 'agreement among raft nodes before linearized reading'  (duration: 201.947574ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:39:45.856375Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.947549ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-19T18:39:45.856402Z","caller":"traceutil/trace.go:171","msg":"trace[297556485] range","detail":"{range_begin:/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset; range_end:; response_count:0; response_revision:395; }","duration":"206.977474ms","start":"2024-09-19T18:39:45.649415Z","end":"2024-09-19T18:39:45.856393Z","steps":["trace[297556485] 'agreement among raft nodes before linearized reading'  (duration: 206.93087ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:39:45.856532Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.416757ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-19T18:39:45.856554Z","caller":"traceutil/trace.go:171","msg":"trace[47804488] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:0; response_revision:395; }","duration":"103.442648ms","start":"2024-09-19T18:39:45.753105Z","end":"2024-09-19T18:39:45.856548Z","steps":["trace[47804488] 'agreement among raft nodes before linearized reading'  (duration: 103.402348ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:39:46.450928Z","caller":"traceutil/trace.go:171","msg":"trace[447015363] transaction","detail":"{read_only:false; response_revision:402; number_of_response:1; }","duration":"192.15555ms","start":"2024-09-19T18:39:46.258754Z","end":"2024-09-19T18:39:46.450910Z","steps":["trace[447015363] 'process raft request'  (duration: 192.041293ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:39:46.457451Z","caller":"traceutil/trace.go:171","msg":"trace[199583041] transaction","detail":"{read_only:false; response_revision:403; number_of_response:1; }","duration":"102.841342ms","start":"2024-09-19T18:39:46.354595Z","end":"2024-09-19T18:39:46.457437Z","steps":["trace[199583041] 'process raft request'  (duration: 102.766841ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:39:47.149186Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.608135ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128032005940909206 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-h29wt\" mod_revision:386 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-h29wt\" value_size:3943 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-h29wt\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-19T18:39:47.149875Z","caller":"traceutil/trace.go:171","msg":"trace[786871471] transaction","detail":"{read_only:false; response_revision:409; number_of_response:1; }","duration":"212.562991ms","start":"2024-09-19T18:39:46.937292Z","end":"2024-09-19T18:39:47.149855Z","steps":["trace[786871471] 'process raft request'  (duration: 110.633244ms)","trace[786871471] 'compare'  (duration: 100.378906ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-19T18:39:47.150124Z","caller":"traceutil/trace.go:171","msg":"trace[713102619] transaction","detail":"{read_only:false; response_revision:410; number_of_response:1; }","duration":"212.118368ms","start":"2024-09-19T18:39:46.937993Z","end":"2024-09-19T18:39:47.150111Z","steps":["trace[713102619] 'process raft request'  (duration: 211.29202ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:39:47.150315Z","caller":"traceutil/trace.go:171","msg":"trace[1466387580] transaction","detail":"{read_only:false; response_revision:411; number_of_response:1; }","duration":"203.943604ms","start":"2024-09-19T18:39:46.946361Z","end":"2024-09-19T18:39:47.150305Z","steps":["trace[1466387580] 'process raft request'  (duration: 203.030294ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:39:47.150417Z","caller":"traceutil/trace.go:171","msg":"trace[1484778379] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"202.338487ms","start":"2024-09-19T18:39:46.948072Z","end":"2024-09-19T18:39:47.150411Z","steps":["trace[1484778379] 'process raft request'  (duration: 201.364589ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:39:47.150492Z","caller":"traceutil/trace.go:171","msg":"trace[1762014815] linearizableReadLoop","detail":"{readStateIndex:421; appliedIndex:419; }","duration":"204.192549ms","start":"2024-09-19T18:39:46.946292Z","end":"2024-09-19T18:39:47.150485Z","steps":["trace[1762014815] 'read index received'  (duration: 101.644452ms)","trace[1762014815] 'applied index is now lower than readState.Index'  (duration: 102.547441ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-19T18:39:47.150718Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.417513ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-19T18:39:47.150742Z","caller":"traceutil/trace.go:171","msg":"trace[30934350] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/minikube-ingress-dns; range_end:; response_count:0; response_revision:413; }","duration":"204.449131ms","start":"2024-09-19T18:39:46.946286Z","end":"2024-09-19T18:39:47.150735Z","steps":["trace[30934350] 'agreement among raft nodes before linearized reading'  (duration: 204.399184ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:41:08.113307Z","caller":"traceutil/trace.go:171","msg":"trace[1867049731] transaction","detail":"{read_only:false; response_revision:1173; number_of_response:1; }","duration":"218.87531ms","start":"2024-09-19T18:41:07.893123Z","end":"2024-09-19T18:41:08.111998Z","steps":["trace[1867049731] 'process raft request'  (duration: 146.821964ms)","trace[1867049731] 'compare'  (duration: 71.937946ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-19T18:49:35.458285Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1609}
	{"level":"info","ts":"2024-09-19T18:49:35.481341Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1609,"took":"22.590141ms","hash":3032817660,"current-db-size-bytes":6651904,"current-db-size":"6.7 MB","current-db-size-in-use-bytes":3510272,"current-db-size-in-use":"3.5 MB"}
	{"level":"info","ts":"2024-09-19T18:49:35.481386Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3032817660,"revision":1609,"compact-revision":-1}
	
	
	==> gcp-auth [8763c1c636d0e544cec68dd7fd43a6178da8c1609fed0cf08b900e90bcd721ae] <==
	2024/09/19 18:41:16 GCP Auth Webhook started!
	2024/09/19 18:41:56 Ready to marshal response ...
	2024/09/19 18:41:56 Ready to write response ...
	2024/09/19 18:41:57 Ready to marshal response ...
	2024/09/19 18:41:57 Ready to write response ...
	2024/09/19 18:41:57 Ready to marshal response ...
	2024/09/19 18:41:57 Ready to write response ...
	2024/09/19 18:50:00 Ready to marshal response ...
	2024/09/19 18:50:00 Ready to write response ...
	2024/09/19 18:50:00 Ready to marshal response ...
	2024/09/19 18:50:00 Ready to write response ...
	2024/09/19 18:50:06 Ready to marshal response ...
	2024/09/19 18:50:06 Ready to write response ...
	2024/09/19 18:50:09 Ready to marshal response ...
	2024/09/19 18:50:09 Ready to write response ...
	2024/09/19 18:50:09 Ready to marshal response ...
	2024/09/19 18:50:09 Ready to write response ...
	2024/09/19 18:50:59 Ready to marshal response ...
	2024/09/19 18:50:59 Ready to write response ...
	2024/09/19 18:50:59 Ready to marshal response ...
	2024/09/19 18:50:59 Ready to write response ...
	2024/09/19 18:50:59 Ready to marshal response ...
	2024/09/19 18:50:59 Ready to write response ...
	
	
	==> kernel <==
	 18:51:12 up  3:33,  0 users,  load average: 0.47, 0.27, 0.52
	Linux addons-685250 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea] <==
	I0919 18:49:08.359434       1 main.go:299] handling current node
	I0919 18:49:18.352576       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:49:18.352625       1 main.go:299] handling current node
	I0919 18:49:28.351414       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:49:28.351457       1 main.go:299] handling current node
	I0919 18:49:38.356829       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:49:38.356864       1 main.go:299] handling current node
	I0919 18:49:48.351429       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:49:48.351471       1 main.go:299] handling current node
	I0919 18:49:58.355400       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:49:58.355442       1 main.go:299] handling current node
	I0919 18:50:08.351561       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:50:08.351601       1 main.go:299] handling current node
	I0919 18:50:18.351388       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:50:18.351445       1 main.go:299] handling current node
	I0919 18:50:28.351883       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:50:28.351916       1 main.go:299] handling current node
	I0919 18:50:38.355848       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:50:38.355883       1 main.go:299] handling current node
	I0919 18:50:48.351363       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:50:48.351403       1 main.go:299] handling current node
	I0919 18:50:58.354892       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:50:58.354930       1 main.go:299] handling current node
	I0919 18:51:08.352424       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:51:08.352461       1 main.go:299] handling current node
	
	
	==> kube-apiserver [d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf] <==
	E0919 18:40:49.662229       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0919 18:40:49.663352       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 18:40:49.663366       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0919 18:41:46.384712       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 18:41:46.384797       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0919 18:41:46.384826       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.77.71:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.77.71:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.77.71:443: connect: connection refused" logger="UnhandledError"
	I0919 18:41:46.398246       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0919 18:50:10.564173       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:10.569821       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:10.575508       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:25.576915       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:30.878332       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:31.884590       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:32.891043       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:33.897594       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:34.904265       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:35.910640       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:36.916660       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:37.922615       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:38.928704       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:39.935718       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0919 18:50:59.939369       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.7.39"}
	
	
	==> kube-controller-manager [4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148] <==
	I0919 18:41:28.013346       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0919 18:41:28.059368       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0919 18:41:30.615340       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="8.764848ms"
	I0919 18:41:30.615452       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="63.248µs"
	I0919 18:41:41.443519       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-685250"
	E0919 18:41:43.853676       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 18:41:44.260680       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0919 18:41:46.380352       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="12.647582ms"
	I0919 18:41:46.380469       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="71.787µs"
	I0919 18:41:47.009083       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0919 18:41:47.037104       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0919 18:46:46.848442       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-685250"
	I0919 18:50:10.155585       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="6.222µs"
	I0919 18:50:11.770760       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-685250"
	I0919 18:50:57.709234       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	I0919 18:50:59.198165       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-769b77f747" duration="18.178µs"
	I0919 18:50:59.969650       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="17.290878ms"
	I0919 18:50:59.974519       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="4.820235ms"
	I0919 18:50:59.974600       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="36.164µs"
	I0919 18:50:59.983156       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="106.51µs"
	I0919 18:51:04.310434       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="80.243µs"
	I0919 18:51:04.324443       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="5.240358ms"
	I0919 18:51:04.324520       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="40.563µs"
	I0919 18:51:10.477775       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="11.904µs"
	I0919 18:51:10.598081       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="8.385µs"
	
	
	==> kube-proxy [1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d] <==
	I0919 18:39:47.957278       1 server_linux.go:66] "Using iptables proxy"
	I0919 18:39:49.044392       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0919 18:39:49.044560       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 18:39:49.357227       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 18:39:49.357310       1 server_linux.go:169] "Using iptables Proxier"
	I0919 18:39:49.437470       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 18:39:49.438149       1 server.go:483] "Version info" version="v1.31.1"
	I0919 18:39:49.438227       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 18:39:49.444383       1 config.go:199] "Starting service config controller"
	I0919 18:39:49.444434       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0919 18:39:49.444451       1 config.go:105] "Starting endpoint slice config controller"
	I0919 18:39:49.444468       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0919 18:39:49.445015       1 config.go:328] "Starting node config controller"
	I0919 18:39:49.445038       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0919 18:39:49.544520       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0919 18:39:49.544894       1 shared_informer.go:320] Caches are synced for service config
	I0919 18:39:49.545185       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae] <==
	W0919 18:39:36.759688       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0919 18:39:36.759698       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 18:39:36.759716       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:36.759719       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0919 18:39:36.759767       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0919 18:39:36.759715       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:37.577548       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0919 18:39:37.577594       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:37.591157       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 18:39:37.591194       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:37.662233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0919 18:39:37.662283       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:37.691829       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 18:39:37.691889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:37.691841       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0919 18:39:37.691945       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:37.788039       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0919 18:39:37.788093       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:37.902881       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 18:39:37.902929       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:37.943554       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0919 18:39:37.943606       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0919 18:39:37.964311       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0919 18:39:37.964357       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0919 18:39:40.957211       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 19 18:51:10 addons-685250 kubelet[1619]: I0919 18:51:10.801613    1619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9e75ebd-ac0a-4be7-a148-aeeb2d8dfb92-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "c9e75ebd-ac0a-4be7-a148-aeeb2d8dfb92" (UID: "c9e75ebd-ac0a-4be7-a148-aeeb2d8dfb92"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 19 18:51:10 addons-685250 kubelet[1619]: I0919 18:51:10.803495    1619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9e75ebd-ac0a-4be7-a148-aeeb2d8dfb92-kube-api-access-nlhb2" (OuterVolumeSpecName: "kube-api-access-nlhb2") pod "c9e75ebd-ac0a-4be7-a148-aeeb2d8dfb92" (UID: "c9e75ebd-ac0a-4be7-a148-aeeb2d8dfb92"). InnerVolumeSpecName "kube-api-access-nlhb2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 19 18:51:10 addons-685250 kubelet[1619]: I0919 18:51:10.902637    1619 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nlhb2\" (UniqueName: \"kubernetes.io/projected/c9e75ebd-ac0a-4be7-a148-aeeb2d8dfb92-kube-api-access-nlhb2\") on node \"addons-685250\" DevicePath \"\""
	Sep 19 18:51:10 addons-685250 kubelet[1619]: I0919 18:51:10.902679    1619 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/c9e75ebd-ac0a-4be7-a148-aeeb2d8dfb92-gcp-creds\") on node \"addons-685250\" DevicePath \"\""
	Sep 19 18:51:11 addons-685250 kubelet[1619]: I0919 18:51:11.003210    1619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jg88\" (UniqueName: \"kubernetes.io/projected/fc0b3544-d729-4e33-a260-ef1ab277d08f-kube-api-access-5jg88\") pod \"fc0b3544-d729-4e33-a260-ef1ab277d08f\" (UID: \"fc0b3544-d729-4e33-a260-ef1ab277d08f\") "
	Sep 19 18:51:11 addons-685250 kubelet[1619]: I0919 18:51:11.003256    1619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqhfr\" (UniqueName: \"kubernetes.io/projected/bdd1e643-0c83-4fed-a147-0dd79f789e29-kube-api-access-lqhfr\") pod \"bdd1e643-0c83-4fed-a147-0dd79f789e29\" (UID: \"bdd1e643-0c83-4fed-a147-0dd79f789e29\") "
	Sep 19 18:51:11 addons-685250 kubelet[1619]: I0919 18:51:11.005189    1619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc0b3544-d729-4e33-a260-ef1ab277d08f-kube-api-access-5jg88" (OuterVolumeSpecName: "kube-api-access-5jg88") pod "fc0b3544-d729-4e33-a260-ef1ab277d08f" (UID: "fc0b3544-d729-4e33-a260-ef1ab277d08f"). InnerVolumeSpecName "kube-api-access-5jg88". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 19 18:51:11 addons-685250 kubelet[1619]: I0919 18:51:11.005302    1619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdd1e643-0c83-4fed-a147-0dd79f789e29-kube-api-access-lqhfr" (OuterVolumeSpecName: "kube-api-access-lqhfr") pod "bdd1e643-0c83-4fed-a147-0dd79f789e29" (UID: "bdd1e643-0c83-4fed-a147-0dd79f789e29"). InnerVolumeSpecName "kube-api-access-lqhfr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 19 18:51:11 addons-685250 kubelet[1619]: I0919 18:51:11.103640    1619 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5jg88\" (UniqueName: \"kubernetes.io/projected/fc0b3544-d729-4e33-a260-ef1ab277d08f-kube-api-access-5jg88\") on node \"addons-685250\" DevicePath \"\""
	Sep 19 18:51:11 addons-685250 kubelet[1619]: I0919 18:51:11.103683    1619 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-lqhfr\" (UniqueName: \"kubernetes.io/projected/bdd1e643-0c83-4fed-a147-0dd79f789e29-kube-api-access-lqhfr\") on node \"addons-685250\" DevicePath \"\""
	Sep 19 18:51:11 addons-685250 kubelet[1619]: I0919 18:51:11.320547    1619 scope.go:117] "RemoveContainer" containerID="6072bdcdef66c425900e1e727a39acb7cb829d0058b81a0d9980c63e89cc3732"
	Sep 19 18:51:11 addons-685250 kubelet[1619]: I0919 18:51:11.337237    1619 scope.go:117] "RemoveContainer" containerID="6072bdcdef66c425900e1e727a39acb7cb829d0058b81a0d9980c63e89cc3732"
	Sep 19 18:51:11 addons-685250 kubelet[1619]: E0919 18:51:11.337643    1619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6072bdcdef66c425900e1e727a39acb7cb829d0058b81a0d9980c63e89cc3732\": container with ID starting with 6072bdcdef66c425900e1e727a39acb7cb829d0058b81a0d9980c63e89cc3732 not found: ID does not exist" containerID="6072bdcdef66c425900e1e727a39acb7cb829d0058b81a0d9980c63e89cc3732"
	Sep 19 18:51:11 addons-685250 kubelet[1619]: I0919 18:51:11.337691    1619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6072bdcdef66c425900e1e727a39acb7cb829d0058b81a0d9980c63e89cc3732"} err="failed to get container status \"6072bdcdef66c425900e1e727a39acb7cb829d0058b81a0d9980c63e89cc3732\": rpc error: code = NotFound desc = could not find container \"6072bdcdef66c425900e1e727a39acb7cb829d0058b81a0d9980c63e89cc3732\": container with ID starting with 6072bdcdef66c425900e1e727a39acb7cb829d0058b81a0d9980c63e89cc3732 not found: ID does not exist"
	Sep 19 18:51:11 addons-685250 kubelet[1619]: I0919 18:51:11.337722    1619 scope.go:117] "RemoveContainer" containerID="31d0bd8822f7dcea585559863e6c965b1d482c50e606f527585e85c2a6c96fa6"
	Sep 19 18:51:11 addons-685250 kubelet[1619]: I0919 18:51:11.355000    1619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdd1e643-0c83-4fed-a147-0dd79f789e29" path="/var/lib/kubelet/pods/bdd1e643-0c83-4fed-a147-0dd79f789e29/volumes"
	Sep 19 18:51:11 addons-685250 kubelet[1619]: I0919 18:51:11.355546    1619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc7b5016-e797-4f70-9fd9-29d297a1a8c1" path="/var/lib/kubelet/pods/dc7b5016-e797-4f70-9fd9-29d297a1a8c1/volumes"
	Sep 19 18:51:11 addons-685250 kubelet[1619]: I0919 18:51:11.355838    1619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc0b3544-d729-4e33-a260-ef1ab277d08f" path="/var/lib/kubelet/pods/fc0b3544-d729-4e33-a260-ef1ab277d08f/volumes"
	Sep 19 18:51:11 addons-685250 kubelet[1619]: I0919 18:51:11.356555    1619 scope.go:117] "RemoveContainer" containerID="31d0bd8822f7dcea585559863e6c965b1d482c50e606f527585e85c2a6c96fa6"
	Sep 19 18:51:11 addons-685250 kubelet[1619]: E0919 18:51:11.356920    1619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31d0bd8822f7dcea585559863e6c965b1d482c50e606f527585e85c2a6c96fa6\": container with ID starting with 31d0bd8822f7dcea585559863e6c965b1d482c50e606f527585e85c2a6c96fa6 not found: ID does not exist" containerID="31d0bd8822f7dcea585559863e6c965b1d482c50e606f527585e85c2a6c96fa6"
	Sep 19 18:51:11 addons-685250 kubelet[1619]: I0919 18:51:11.356953    1619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31d0bd8822f7dcea585559863e6c965b1d482c50e606f527585e85c2a6c96fa6"} err="failed to get container status \"31d0bd8822f7dcea585559863e6c965b1d482c50e606f527585e85c2a6c96fa6\": rpc error: code = NotFound desc = could not find container \"31d0bd8822f7dcea585559863e6c965b1d482c50e606f527585e85c2a6c96fa6\": container with ID starting with 31d0bd8822f7dcea585559863e6c965b1d482c50e606f527585e85c2a6c96fa6 not found: ID does not exist"
	Sep 19 18:51:11 addons-685250 kubelet[1619]: I0919 18:51:11.356981    1619 scope.go:117] "RemoveContainer" containerID="3a91e259f62be849d7537c3da094910a11274c2bce5bb7fd891cfa74f37133a5"
	Sep 19 18:51:11 addons-685250 kubelet[1619]: I0919 18:51:11.373436    1619 scope.go:117] "RemoveContainer" containerID="3a91e259f62be849d7537c3da094910a11274c2bce5bb7fd891cfa74f37133a5"
	Sep 19 18:51:11 addons-685250 kubelet[1619]: E0919 18:51:11.373872    1619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a91e259f62be849d7537c3da094910a11274c2bce5bb7fd891cfa74f37133a5\": container with ID starting with 3a91e259f62be849d7537c3da094910a11274c2bce5bb7fd891cfa74f37133a5 not found: ID does not exist" containerID="3a91e259f62be849d7537c3da094910a11274c2bce5bb7fd891cfa74f37133a5"
	Sep 19 18:51:11 addons-685250 kubelet[1619]: I0919 18:51:11.373909    1619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a91e259f62be849d7537c3da094910a11274c2bce5bb7fd891cfa74f37133a5"} err="failed to get container status \"3a91e259f62be849d7537c3da094910a11274c2bce5bb7fd891cfa74f37133a5\": rpc error: code = NotFound desc = could not find container \"3a91e259f62be849d7537c3da094910a11274c2bce5bb7fd891cfa74f37133a5\": container with ID starting with 3a91e259f62be849d7537c3da094910a11274c2bce5bb7fd891cfa74f37133a5 not found: ID does not exist"
	
	
	==> storage-provisioner [c265d33c64155de4fde21bb6eae221bdd5a2524b7a15aa0b673f23ce4f17b12d] <==
	I0919 18:40:29.640679       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 18:40:29.648412       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 18:40:29.648464       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0919 18:40:29.655439       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 18:40:29.655525       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9a3690d0-7216-4b96-a260-4e04cffeb393", APIVersion:"v1", ResourceVersion:"963", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-685250_e66922b4-9073-4377-9148-47e4da8ece38 became leader
	I0919 18:40:29.655628       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-685250_e66922b4-9073-4377-9148-47e4da8ece38!
	I0919 18:40:29.756484       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-685250_e66922b4-9073-4377-9148-47e4da8ece38!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-685250 -n addons-685250
helpers_test.go:261: (dbg) Run:  kubectl --context addons-685250 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox task-pv-pod ingress-nginx-admission-create-rqqsb ingress-nginx-admission-patch-zkk9z
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-685250 describe pod busybox task-pv-pod ingress-nginx-admission-create-rqqsb ingress-nginx-admission-patch-zkk9z
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-685250 describe pod busybox task-pv-pod ingress-nginx-admission-create-rqqsb ingress-nginx-admission-patch-zkk9z: exit status 1 (73.305028ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-685250/192.168.49.2
	Start Time:       Thu, 19 Sep 2024 18:41:57 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pbctc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-pbctc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m15s                  default-scheduler  Successfully assigned default/busybox to addons-685250
	  Normal   Pulling    7m52s (x4 over 9m15s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m52s (x4 over 9m15s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m52s (x4 over 9m15s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m41s (x6 over 9m15s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m7s (x21 over 9m15s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-685250/192.168.49.2
	Start Time:       Thu, 19 Sep 2024 18:50:06 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.25
	IPs:
	  IP:  10.244.0.25
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mzftq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-mzftq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age               From               Message
	  ----     ------     ----              ----               -------
	  Normal   Scheduled  66s               default-scheduler  Successfully assigned default/task-pv-pod to addons-685250
	  Warning  Failed     20s               kubelet            Failed to pull image "docker.io/nginx": determining manifest MIME type for docker://nginx:latest: reading manifest sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     20s               kubelet            Error: ErrImagePull
	  Normal   BackOff    19s               kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     19s               kubelet            Error: ImagePullBackOff
	  Normal   Pulling    6s (x2 over 66s)  kubelet            Pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-rqqsb" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-zkk9z" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-685250 describe pod busybox task-pv-pod ingress-nginx-admission-create-rqqsb ingress-nginx-admission-patch-zkk9z: exit status 1
--- FAIL: TestAddons/parallel/Registry (73.05s)
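Note that the wget probe above timed out (exit status 1 after 1m0s) even though both registry pods reported healthy, which points at in-cluster DNS or the registry Service's endpoints rather than the pods themselves. A minimal diagnostic sketch, assuming the same addons-685250 context and busybox image the test uses (the pod name registry-debug is hypothetical, not part of the suite):

	# Verify the service name resolves from inside the cluster
	kubectl --context addons-685250 run registry-debug --rm -it --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -- nslookup registry.kube-system.svc.cluster.local

	# Verify the registry Service actually has backing endpoints
	kubectl --context addons-685250 -n kube-system get endpoints registry

If the name resolves and endpoints exist, rerunning the wget with an explicit timeout (wget -T 10 --spider ...) narrows the failure to the HTTP path itself.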

TestAddons/parallel/Ingress (482.97s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-685250 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-685250 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-685250 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ebd6539d-2dc6-46b7-8766-cd26ce5e6547] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:329: TestAddons/parallel/Ingress: WARNING: pod list for "default" "run=nginx" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:252: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:252: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-685250 -n addons-685250
addons_test.go:252: TestAddons/parallel/Ingress: showing logs for failed pods as of 2024-09-19 18:59:42.697625409 +0000 UTC m=+1252.600585239
addons_test.go:252: (dbg) Run:  kubectl --context addons-685250 describe po nginx -n default
addons_test.go:252: (dbg) kubectl --context addons-685250 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-685250/192.168.49.2
Start Time:       Thu, 19 Sep 2024 18:51:42 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.30
IPs:
  IP:  10.244.0.30
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w8nj8 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-w8nj8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  8m                     default-scheduler  Successfully assigned default/nginx to addons-685250
  Warning  Failed     4m33s                  kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   Pulling    3m42s (x4 over 8m)     kubelet            Pulling image "docker.io/nginx:alpine"
  Warning  Failed     3m11s (x3 over 7m29s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     3m11s (x4 over 7m29s)  kubelet            Error: ErrImagePull
  Normal   BackOff    2m48s (x7 over 7m29s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
  Warning  Failed     2m48s (x7 over 7m29s)  kubelet            Error: ImagePullBackOff
addons_test.go:252: (dbg) Run:  kubectl --context addons-685250 logs nginx -n default
addons_test.go:252: (dbg) Non-zero exit: kubectl --context addons-685250 logs nginx -n default: exit status 1 (68.724789ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:252: kubectl --context addons-685250 logs nginx -n default: exit status 1
addons_test.go:253: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
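Every pull failure in this test is the Docker Hub anonymous pull rate limit (toomanyrequests), not a bad manifest. One standard way to lift the limit, as the error message itself suggests, is to authenticate the pulls; a sketch with placeholder credentials (the secret name regcred is arbitrary, and the username/token are stand-ins):

	# Create a Docker Hub pull secret (username/token are placeholders)
	kubectl --context addons-685250 -n default create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<access-token>

	# Let pods that use the default service account pull with it
	kubectl --context addons-685250 -n default patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"regcred"}]}'

Alternatively, pointing the container runtime at a registry mirror avoids Docker Hub entirely.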
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-685250
helpers_test.go:235: (dbg) docker inspect addons-685250:

-- stdout --
	[
	    {
	        "Id": "cdadbc576653c86e10c24ff9ab47a4cd6b7ecae7a3e166835365c09bb919cdaf",
	        "Created": "2024-09-19T18:39:26.544485958Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 762128,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-19T18:39:26.653035442Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bb3bcbaabeeeadbf6b43ae7d1d07e504b3c8a94ec024df89bcb237eba4f5e9b3",
	        "ResolvConfPath": "/var/lib/docker/containers/cdadbc576653c86e10c24ff9ab47a4cd6b7ecae7a3e166835365c09bb919cdaf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cdadbc576653c86e10c24ff9ab47a4cd6b7ecae7a3e166835365c09bb919cdaf/hostname",
	        "HostsPath": "/var/lib/docker/containers/cdadbc576653c86e10c24ff9ab47a4cd6b7ecae7a3e166835365c09bb919cdaf/hosts",
	        "LogPath": "/var/lib/docker/containers/cdadbc576653c86e10c24ff9ab47a4cd6b7ecae7a3e166835365c09bb919cdaf/cdadbc576653c86e10c24ff9ab47a4cd6b7ecae7a3e166835365c09bb919cdaf-json.log",
	        "Name": "/addons-685250",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-685250:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-685250",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/719a21df389c5cfbde7422092d8e14fa0af9502fcf666f7bf69a458b574172f9-init/diff:/var/lib/docker/overlay2/71eee05749e16aef5497ee0d3682f846917f1ee6949d544cdec1fff2723452d5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/719a21df389c5cfbde7422092d8e14fa0af9502fcf666f7bf69a458b574172f9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/719a21df389c5cfbde7422092d8e14fa0af9502fcf666f7bf69a458b574172f9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/719a21df389c5cfbde7422092d8e14fa0af9502fcf666f7bf69a458b574172f9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-685250",
	                "Source": "/var/lib/docker/volumes/addons-685250/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-685250",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-685250",
	                "name.minikube.sigs.k8s.io": "addons-685250",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1b0ccece079b2c012374acf46f9c349cae0c8bd9ae1a208e2d0acc049d21c7cb",
	            "SandboxKey": "/var/run/docker/netns/1b0ccece079b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33518"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33519"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33522"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33520"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33521"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-685250": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "3c159902c31cb41244d3423728e25a3f29e7e8e24a95c6da692d29e053f66798",
	                    "EndpointID": "51640df6c09057e35d4d5a9f04688e387f2981906971ee1afa85b24730ac60a3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-685250",
	                        "cdadbc576653"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-685250 -n addons-685250
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-685250 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-685250 logs -n 25: (1.229101922s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-845536                                                                     | download-only-845536   | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC | 19 Sep 24 18:38 UTC |
	| start   | -o=json --download-only                                                                     | download-only-759185   | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC |                     |
	|         | -p download-only-759185                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| delete  | -p download-only-759185                                                                     | download-only-759185   | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| delete  | -p download-only-845536                                                                     | download-only-845536   | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| delete  | -p download-only-759185                                                                     | download-only-759185   | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| start   | --download-only -p                                                                          | download-docker-985684 | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | download-docker-985684                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-985684                                                                   | download-docker-985684 | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-515604   | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | binary-mirror-515604                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:32895                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-515604                                                                     | binary-mirror-515604   | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| addons  | disable dashboard -p                                                                        | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | addons-685250                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | addons-685250                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-685250 --wait=true                                                                | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:41 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-685250 ssh cat                                                                       | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:50 UTC | 19 Sep 24 18:50 UTC |
	|         | /opt/local-path-provisioner/pvc-83c31ed0-fc42-4249-94b0-a7e77464cc71_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-685250 addons disable                                                                | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:50 UTC | 19 Sep 24 18:50 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:50 UTC | 19 Sep 24 18:50 UTC |
	|         | addons-685250                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:50 UTC | 19 Sep 24 18:50 UTC |
	|         | -p addons-685250                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-685250 ip                                                                            | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	| addons  | addons-685250 addons disable                                                                | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-685250 addons disable                                                                | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-685250 addons disable                                                                | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	|         | addons-685250                                                                               |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	|         | -p addons-685250                                                                            |                        |         |         |                     |                     |
	| addons  | addons-685250 addons disable                                                                | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-685250 addons                                                                        | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:55 UTC | 19 Sep 24 18:55 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
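	
	The start row in the Audit table wraps its arguments across many cells; reassembled into a single invocation (a reconstruction from the table rows above, not a capture from the harness):
	
	    out/minikube-linux-amd64 start -p addons-685250 --wait=true \
	      --memory=4000 --alsologtostderr \
	      --addons=registry --addons=metrics-server --addons=volumesnapshots \
	      --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner \
	      --addons=inspektor-gadget --addons=storage-provisioner-rancher \
	      --addons=nvidia-device-plugin --addons=yakd --addons=volcano \
	      --driver=docker --container-runtime=crio \
	      --addons=ingress --addons=ingress-dns --addons=helm-tiller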
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 18:39:03
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
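	
	Decoding the first entry below against that format, for readers unfamiliar with glog-style prefixes:
	
	    # I0919 18:39:03.200212  761388 out.go:345] Setting OutFile to fd 1 ...
	    #   I               severity ([IWEF] = Info/Warning/Error/Fatal)
	    #   0919            month and day (mmdd)
	    #   18:39:03.200212 hh:mm:ss.uuuuuu
	    #   761388          thread id (the minikube process here)
	    #   out.go:345      source file and line that emitted the message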
	I0919 18:39:03.200212  761388 out.go:345] Setting OutFile to fd 1 ...
	I0919 18:39:03.200467  761388 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:39:03.200476  761388 out.go:358] Setting ErrFile to fd 2...
	I0919 18:39:03.200481  761388 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:39:03.200718  761388 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-753213/.minikube/bin
	I0919 18:39:03.201426  761388 out.go:352] Setting JSON to false
	I0919 18:39:03.202398  761388 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12093,"bootTime":1726759050,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 18:39:03.202515  761388 start.go:139] virtualization: kvm guest
	I0919 18:39:03.204903  761388 out.go:177] * [addons-685250] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0919 18:39:03.206237  761388 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 18:39:03.206258  761388 notify.go:220] Checking for updates...
	I0919 18:39:03.208919  761388 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 18:39:03.210261  761388 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-753213/kubeconfig
	I0919 18:39:03.211535  761388 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-753213/.minikube
	I0919 18:39:03.212802  761388 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 18:39:03.213964  761388 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 18:39:03.215359  761388 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 18:39:03.237406  761388 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0919 18:39:03.237534  761388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:39:03.283495  761388 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-19 18:39:03.274719559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 18:39:03.283600  761388 docker.go:318] overlay module found
	I0919 18:39:03.286271  761388 out.go:177] * Using the docker driver based on user configuration
	I0919 18:39:03.287521  761388 start.go:297] selected driver: docker
	I0919 18:39:03.287534  761388 start.go:901] validating driver "docker" against <nil>
	I0919 18:39:03.287545  761388 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 18:39:03.288361  761388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:39:03.333412  761388 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-19 18:39:03.324780201 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 18:39:03.333593  761388 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 18:39:03.333839  761388 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 18:39:03.335585  761388 out.go:177] * Using Docker driver with root privileges
	I0919 18:39:03.336930  761388 cni.go:84] Creating CNI manager for ""
	I0919 18:39:03.336986  761388 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 18:39:03.336997  761388 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 18:39:03.337090  761388 start.go:340] cluster config:
	{Name:addons-685250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-685250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 18:39:03.338526  761388 out.go:177] * Starting "addons-685250" primary control-plane node in "addons-685250" cluster
	I0919 18:39:03.339809  761388 cache.go:121] Beginning downloading kic base image for docker with crio
	I0919 18:39:03.340995  761388 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0919 18:39:03.342026  761388 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 18:39:03.342057  761388 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19664-753213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0919 18:39:03.342055  761388 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0919 18:39:03.342063  761388 cache.go:56] Caching tarball of preloaded images
	I0919 18:39:03.342182  761388 preload.go:172] Found /home/jenkins/minikube-integration/19664-753213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 18:39:03.342194  761388 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0919 18:39:03.342520  761388 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/config.json ...
	I0919 18:39:03.342542  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/config.json: {Name:mk74efcccadcff6ea4a0787d2832be4be3984d30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:03.359223  761388 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0919 18:39:03.359412  761388 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0919 18:39:03.359431  761388 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0919 18:39:03.359435  761388 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0919 18:39:03.359442  761388 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0919 18:39:03.359450  761388 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0919 18:39:14.708408  761388 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0919 18:39:14.708455  761388 cache.go:194] Successfully downloaded all kic artifacts
	I0919 18:39:14.708519  761388 start.go:360] acquireMachinesLock for addons-685250: {Name:mk56c74bc959dec1fb8992b737e0e35c0cd4ad03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 18:39:14.708642  761388 start.go:364] duration metric: took 84.107µs to acquireMachinesLock for "addons-685250"
	I0919 18:39:14.708671  761388 start.go:93] Provisioning new machine with config: &{Name:addons-685250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-685250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 18:39:14.708780  761388 start.go:125] createHost starting for "" (driver="docker")
	I0919 18:39:14.710766  761388 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0919 18:39:14.711013  761388 start.go:159] libmachine.API.Create for "addons-685250" (driver="docker")
	I0919 18:39:14.711068  761388 client.go:168] LocalClient.Create starting
	I0919 18:39:14.711150  761388 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca.pem
	I0919 18:39:14.824308  761388 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/cert.pem
	I0919 18:39:15.025789  761388 cli_runner.go:164] Run: docker network inspect addons-685250 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 18:39:15.041206  761388 cli_runner.go:211] docker network inspect addons-685250 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 18:39:15.041292  761388 network_create.go:284] running [docker network inspect addons-685250] to gather additional debugging logs...
	I0919 18:39:15.041313  761388 cli_runner.go:164] Run: docker network inspect addons-685250
	W0919 18:39:15.056441  761388 cli_runner.go:211] docker network inspect addons-685250 returned with exit code 1
	I0919 18:39:15.056478  761388 network_create.go:287] error running [docker network inspect addons-685250]: docker network inspect addons-685250: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-685250 not found
	I0919 18:39:15.056490  761388 network_create.go:289] output of [docker network inspect addons-685250]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-685250 not found
	
	** /stderr **
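	
	The failed inspect above is the expected probe: minikube creates the profile network only after confirming it is absent. The same probe-then-create flow as a standalone sketch, flags taken from the create command logged a few entries below:
	
	    docker network inspect addons-685250 >/dev/null 2>&1 || \
	      docker network create --driver=bridge \
	        --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	        -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	        --label=created_by.minikube.sigs.k8s.io=true \
	        --label=name.minikube.sigs.k8s.io=addons-685250 \
	        addons-685250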
	I0919 18:39:15.056606  761388 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 18:39:15.072776  761388 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001446920}
	I0919 18:39:15.072824  761388 network_create.go:124] attempt to create docker network addons-685250 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0919 18:39:15.072890  761388 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-685250 addons-685250
	I0919 18:39:15.132522  761388 network_create.go:108] docker network addons-685250 192.168.49.0/24 created
	I0919 18:39:15.132554  761388 kic.go:121] calculated static IP "192.168.49.2" for the "addons-685250" container
	I0919 18:39:15.132644  761388 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 18:39:15.147671  761388 cli_runner.go:164] Run: docker volume create addons-685250 --label name.minikube.sigs.k8s.io=addons-685250 --label created_by.minikube.sigs.k8s.io=true
	I0919 18:39:15.163961  761388 oci.go:103] Successfully created a docker volume addons-685250
	I0919 18:39:15.164048  761388 cli_runner.go:164] Run: docker run --rm --name addons-685250-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-685250 --entrypoint /usr/bin/test -v addons-685250:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
	I0919 18:39:22.072772  761388 cli_runner.go:217] Completed: docker run --rm --name addons-685250-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-685250 --entrypoint /usr/bin/test -v addons-685250:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (6.908674607s)
	I0919 18:39:22.072803  761388 oci.go:107] Successfully prepared a docker volume addons-685250
	I0919 18:39:22.072836  761388 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 18:39:22.072868  761388 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 18:39:22.072944  761388 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19664-753213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-685250:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 18:39:26.483616  761388 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19664-753213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-685250:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (4.41062526s)
	I0919 18:39:26.483649  761388 kic.go:203] duration metric: took 4.410778812s to extract preloaded images to volume ...
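	
	Those two completed runs are the preload fast path: a throwaway container first proves the named volume mounts and that the base image carries /var/lib, then a tar entrypoint unpacks the cached images into the volume. Equivalent standalone commands, with IMAGE and PRELOAD standing in for the kicbase reference and tarball path from the log (digest and jenkins-specific path elided here):
	
	    IMAGE='gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662'
	    PRELOAD="$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4"
	    # 1. sanity-check the volume against the base image
	    docker run --rm --entrypoint /usr/bin/test -v addons-685250:/var "$IMAGE" -d /var/lib
	    # 2. extract the lz4 preload straight into the volume
	    docker run --rm --entrypoint /usr/bin/tar \
	      -v "$PRELOAD":/preloaded.tar:ro -v addons-685250:/extractDir \
	      "$IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir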
	W0919 18:39:26.483780  761388 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0919 18:39:26.483868  761388 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 18:39:26.529192  761388 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-685250 --name addons-685250 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-685250 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-685250 --network addons-685250 --ip 192.168.49.2 --volume addons-685250:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
	I0919 18:39:26.802037  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Running}}
	I0919 18:39:26.820911  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:26.839572  761388 cli_runner.go:164] Run: docker exec addons-685250 stat /var/lib/dpkg/alternatives/iptables
	I0919 18:39:26.880131  761388 oci.go:144] the created container "addons-685250" has a running status.
	I0919 18:39:26.880165  761388 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa...
	I0919 18:39:27.339670  761388 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 18:39:27.361758  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:27.379045  761388 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 18:39:27.379068  761388 kic_runner.go:114] Args: [docker exec --privileged addons-685250 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 18:39:27.421090  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:27.437982  761388 machine.go:93] provisionDockerMachine start ...
	I0919 18:39:27.438079  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:27.456233  761388 main.go:141] libmachine: Using SSH client type: native
	I0919 18:39:27.456524  761388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33518 <nil> <nil>}
	I0919 18:39:27.456542  761388 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 18:39:27.594819  761388 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-685250
	
	I0919 18:39:27.594862  761388 ubuntu.go:169] provisioning hostname "addons-685250"
	I0919 18:39:27.594952  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:27.613368  761388 main.go:141] libmachine: Using SSH client type: native
	I0919 18:39:27.613592  761388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33518 <nil> <nil>}
	I0919 18:39:27.613622  761388 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-685250 && echo "addons-685250" | sudo tee /etc/hostname
	I0919 18:39:27.754187  761388 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-685250
	
	I0919 18:39:27.754262  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:27.771895  761388 main.go:141] libmachine: Using SSH client type: native
	I0919 18:39:27.772132  761388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33518 <nil> <nil>}
	I0919 18:39:27.772152  761388 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-685250' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-685250/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-685250' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 18:39:27.903239  761388 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 18:39:27.903269  761388 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19664-753213/.minikube CaCertPath:/home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19664-753213/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19664-753213/.minikube}
	I0919 18:39:27.903324  761388 ubuntu.go:177] setting up certificates
	I0919 18:39:27.903341  761388 provision.go:84] configureAuth start
	I0919 18:39:27.903404  761388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-685250
	I0919 18:39:27.919357  761388 provision.go:143] copyHostCerts
	I0919 18:39:27.919427  761388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19664-753213/.minikube/key.pem (1679 bytes)
	I0919 18:39:27.919543  761388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19664-753213/.minikube/ca.pem (1082 bytes)
	I0919 18:39:27.919618  761388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19664-753213/.minikube/cert.pem (1123 bytes)
	I0919 18:39:27.919681  761388 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19664-753213/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca-key.pem org=jenkins.addons-685250 san=[127.0.0.1 192.168.49.2 addons-685250 localhost minikube]
	I0919 18:39:28.160212  761388 provision.go:177] copyRemoteCerts
	I0919 18:39:28.160283  761388 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 18:39:28.160320  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:28.177005  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:28.271718  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 18:39:28.293331  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 18:39:28.314500  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 18:39:28.335572  761388 provision.go:87] duration metric: took 432.21249ms to configureAuth
	I0919 18:39:28.335604  761388 ubuntu.go:193] setting minikube options for container-runtime
	I0919 18:39:28.335790  761388 config.go:182] Loaded profile config "addons-685250": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 18:39:28.335896  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:28.352244  761388 main.go:141] libmachine: Using SSH client type: native
	I0919 18:39:28.352438  761388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33518 <nil> <nil>}
	I0919 18:39:28.352454  761388 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 18:39:28.570762  761388 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 18:39:28.570788  761388 machine.go:96] duration metric: took 1.132783666s to provisionDockerMachine
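	
	The CRIO_MINIKUBE_OPTIONS line echoed just above is the whole drop-in: the SSH command writes a one-line environment file that the crio unit picks up on restart. The resulting file, reconstructed from the printf payload:
	
	    # /etc/sysconfig/crio.minikube
	    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '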
	I0919 18:39:28.570801  761388 client.go:171] duration metric: took 13.859723313s to LocalClient.Create
	I0919 18:39:28.570823  761388 start.go:167] duration metric: took 13.859810827s to libmachine.API.Create "addons-685250"
	I0919 18:39:28.570832  761388 start.go:293] postStartSetup for "addons-685250" (driver="docker")
	I0919 18:39:28.570846  761388 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 18:39:28.570928  761388 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 18:39:28.570969  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:28.587920  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:28.684315  761388 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 18:39:28.687444  761388 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 18:39:28.687482  761388 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 18:39:28.687493  761388 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 18:39:28.687502  761388 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 18:39:28.687516  761388 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-753213/.minikube/addons for local assets ...
	I0919 18:39:28.687596  761388 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-753213/.minikube/files for local assets ...
	I0919 18:39:28.687629  761388 start.go:296] duration metric: took 116.788714ms for postStartSetup
	I0919 18:39:28.687939  761388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-685250
	I0919 18:39:28.704801  761388 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/config.json ...
	I0919 18:39:28.705071  761388 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 18:39:28.705124  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:28.721672  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:28.816217  761388 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 18:39:28.820354  761388 start.go:128] duration metric: took 14.111556683s to createHost
	I0919 18:39:28.820377  761388 start.go:83] releasing machines lock for "addons-685250", held for 14.111720986s
	I0919 18:39:28.820433  761388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-685250
	I0919 18:39:28.837043  761388 ssh_runner.go:195] Run: cat /version.json
	I0919 18:39:28.837093  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:28.837137  761388 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 18:39:28.837212  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:28.853306  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:28.853640  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:29.015641  761388 ssh_runner.go:195] Run: systemctl --version
	I0919 18:39:29.019690  761388 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 18:39:29.156274  761388 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 18:39:29.160605  761388 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 18:39:29.178821  761388 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 18:39:29.178900  761388 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 18:39:29.204313  761388 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 18:39:29.204337  761388 start.go:495] detecting cgroup driver to use...
	I0919 18:39:29.204370  761388 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0919 18:39:29.204409  761388 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 18:39:29.218099  761388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 18:39:29.228094  761388 docker.go:217] disabling cri-docker service (if available) ...
	I0919 18:39:29.228158  761388 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 18:39:29.240433  761388 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 18:39:29.253142  761388 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 18:39:29.326278  761388 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 18:39:29.406802  761388 docker.go:233] disabling docker service ...
	I0919 18:39:29.406859  761388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 18:39:29.424951  761388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 18:39:29.435168  761388 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 18:39:29.514566  761388 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 18:39:29.591355  761388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 18:39:29.601869  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 18:39:29.616535  761388 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0919 18:39:29.616600  761388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:39:29.625293  761388 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 18:39:29.625347  761388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:39:29.634150  761388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:39:29.642705  761388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:39:29.651092  761388 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 18:39:29.659117  761388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:39:29.667830  761388 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:39:29.681755  761388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:39:29.690617  761388 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 18:39:29.698112  761388 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 18:39:29.705724  761388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:39:29.785529  761388 ssh_runner.go:195] Run: sudo systemctl restart crio
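	
	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings before the restart; a sketch of the intended end state, not a capture of the file:
	
	    pause_image = "registry.k8s.io/pause:3.10"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]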
	I0919 18:39:29.878210  761388 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 18:39:29.878295  761388 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 18:39:29.881824  761388 start.go:563] Will wait 60s for crictl version
	I0919 18:39:29.881889  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:39:29.884918  761388 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 18:39:29.918116  761388 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
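	
	crictl resolves its endpoint from the /etc/crictl.yaml written a few entries above (runtime-endpoint: unix:///var/run/crio/crio.sock); the same handshake can be repeated by hand on the node:
	
	    sudo crictl version               # prints the cri-o 1.24.6 block shown above
	    sudo crictl images --output json  # the preload check that runs shortly after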
	I0919 18:39:29.918200  761388 ssh_runner.go:195] Run: crio --version
	I0919 18:39:29.952309  761388 ssh_runner.go:195] Run: crio --version
	I0919 18:39:29.988286  761388 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0919 18:39:29.989606  761388 cli_runner.go:164] Run: docker network inspect addons-685250 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 18:39:30.005833  761388 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 18:39:30.009469  761388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 18:39:30.020164  761388 kubeadm.go:883] updating cluster {Name:addons-685250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-685250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 18:39:30.020281  761388 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 18:39:30.020325  761388 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 18:39:30.083858  761388 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 18:39:30.083879  761388 crio.go:433] Images already preloaded, skipping extraction
	I0919 18:39:30.083926  761388 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 18:39:30.116167  761388 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 18:39:30.116190  761388 cache_images.go:84] Images are preloaded, skipping loading
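
The preload check shells out to crictl and inspects its JSON image list. A sketch of that round trip; the struct fields here are an assumption based on the CRI ListImages response shape, so verify them against your crictl version:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList approximates the shape of `crictl images --output json`;
// treat the field names as an assumption, not crictl's documented contract.
type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Println(img.ID, img.RepoTags) // compare against the expected preload set
	}
}
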
	I0919 18:39:30.116199  761388 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0919 18:39:30.116364  761388 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-685250 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-685250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 18:39:30.116428  761388 ssh_runner.go:195] Run: crio config
	I0919 18:39:30.156650  761388 cni.go:84] Creating CNI manager for ""
	I0919 18:39:30.156675  761388 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 18:39:30.156688  761388 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 18:39:30.156711  761388 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-685250 NodeName:addons-685250 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 18:39:30.156845  761388 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-685250"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
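
The generated kubeadm.yaml above is one multi-document YAML stream: InitConfiguration and ClusterConfiguration for kubeadm itself, then KubeletConfiguration and KubeProxyConfiguration, separated by ---. A hedged Go sketch (using gopkg.in/yaml.v3 on an abbreviated inline copy of the stream) that walks the documents and prints each kind:

package main

import (
	"errors"
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	// Abbreviated stand-in for the kubeadm.yaml stream shown above.
	stream := `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`
	dec := yaml.NewDecoder(strings.NewReader(stream))
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
			break // end of the multi-document stream
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}
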
	I0919 18:39:30.156908  761388 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0919 18:39:30.165387  761388 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 18:39:30.165448  761388 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 18:39:30.173207  761388 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0919 18:39:30.188946  761388 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 18:39:30.205638  761388 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0919 18:39:30.222877  761388 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0919 18:39:30.226085  761388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 18:39:30.236096  761388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:39:30.319405  761388 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 18:39:30.332104  761388 certs.go:68] Setting up /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250 for IP: 192.168.49.2
	I0919 18:39:30.332125  761388 certs.go:194] generating shared ca certs ...
	I0919 18:39:30.332140  761388 certs.go:226] acquiring lock for ca certs: {Name:mkac4e621bd7a8886df3f6838bd34b99172c371a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.332275  761388 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19664-753213/.minikube/ca.key
	I0919 18:39:30.528690  761388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-753213/.minikube/ca.crt ...
	I0919 18:39:30.528724  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/ca.crt: {Name:mked4ee6d8831516d03c840d59935532e3f21cd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.528941  761388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-753213/.minikube/ca.key ...
	I0919 18:39:30.528958  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/ca.key: {Name:mkcb02ba3f86d66b352caba2841d6dd380f76edb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.529067  761388 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19664-753213/.minikube/proxy-client-ca.key
	I0919 18:39:30.624034  761388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-753213/.minikube/proxy-client-ca.crt ...
	I0919 18:39:30.624068  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/proxy-client-ca.crt: {Name:mkaa7904f1d229a9140b6f62d1d672cf00a2f2cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.624277  761388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-753213/.minikube/proxy-client-ca.key ...
	I0919 18:39:30.624295  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/proxy-client-ca.key: {Name:mkb6bb0d0409e9bd1f254506994f2a2447e5cc79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.624398  761388 certs.go:256] generating profile certs ...
	I0919 18:39:30.624464  761388 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.key
	I0919 18:39:30.624490  761388 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt with IP's: []
	I0919 18:39:30.752151  761388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt ...
	I0919 18:39:30.752185  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt: {Name:mk69a3ec8793b5371f583f88b2bebacea2af07ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.752390  761388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.key ...
	I0919 18:39:30.752406  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.key: {Name:mk7d143fc1d3dd645310e55acf6f951beafc9848 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.752506  761388 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.key.89e36966
	I0919 18:39:30.752526  761388 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.crt.89e36966 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0919 18:39:30.915660  761388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.crt.89e36966 ...
	I0919 18:39:30.915697  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.crt.89e36966: {Name:mkdb41eb017de5d424bda2067b62b8ceafaf07c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.915911  761388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.key.89e36966 ...
	I0919 18:39:30.915931  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.key.89e36966: {Name:mkbc3d5e5a7473c69994a57b2f0a8b8707ffe9e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.916041  761388 certs.go:381] copying /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.crt.89e36966 -> /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.crt
	I0919 18:39:30.916130  761388 certs.go:385] copying /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.key.89e36966 -> /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.key
	I0919 18:39:30.916176  761388 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/proxy-client.key
	I0919 18:39:30.916195  761388 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/proxy-client.crt with IP's: []
	I0919 18:39:31.094514  761388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/proxy-client.crt ...
	I0919 18:39:31.094599  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/proxy-client.crt: {Name:mk9dc2f777ee8d63ffc9f5a10453c45f6382bf93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:31.094776  761388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/proxy-client.key ...
	I0919 18:39:31.094791  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/proxy-client.key: {Name:mk32678ed11fe18054a48114b5283e466fb989c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:31.094999  761388 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 18:39:31.095055  761388 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca.pem (1082 bytes)
	I0919 18:39:31.095092  761388 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/cert.pem (1123 bytes)
	I0919 18:39:31.095124  761388 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/key.pem (1679 bytes)
	I0919 18:39:31.095878  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 18:39:31.120600  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 18:39:31.142506  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 18:39:31.164187  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 18:39:31.185942  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0919 18:39:31.207396  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 18:39:31.229449  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 18:39:31.250877  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 18:39:31.272098  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 18:39:31.293403  761388 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 18:39:31.308896  761388 ssh_runner.go:195] Run: openssl version
	I0919 18:39:31.314017  761388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 18:39:31.322554  761388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:39:31.325634  761388 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:39:31.325693  761388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:39:31.331892  761388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 18:39:31.340220  761388 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 18:39:31.343178  761388 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
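
Probing for a certificate that kubeadm only writes during init is a cheap first-start test, which is exactly what the failed stat above signals. A small Go equivalent (the path is taken from the log; the helper name is made up):

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// isFirstStart reports whether the apiserver-kubelet-client cert is absent,
// the same signal the stat check above uses to detect a first start.
func isFirstStart() bool {
	_, err := os.Stat("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	return errors.Is(err, fs.ErrNotExist)
}

func main() {
	fmt.Println("first start:", isFirstStart())
}
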
	I0919 18:39:31.343230  761388 kubeadm.go:392] StartCluster: {Name:addons-685250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-685250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 18:39:31.343328  761388 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 18:39:31.343377  761388 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 18:39:31.376569  761388 cri.go:89] found id: ""
	I0919 18:39:31.376645  761388 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 18:39:31.384955  761388 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 18:39:31.393013  761388 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 18:39:31.393065  761388 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 18:39:31.400980  761388 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 18:39:31.400998  761388 kubeadm.go:157] found existing configuration files:
	
	I0919 18:39:31.401035  761388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 18:39:31.408813  761388 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 18:39:31.408861  761388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 18:39:31.416662  761388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 18:39:31.424342  761388 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 18:39:31.424386  761388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 18:39:31.431658  761388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 18:39:31.438947  761388 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 18:39:31.438996  761388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 18:39:31.445986  761388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 18:39:31.453391  761388 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 18:39:31.453444  761388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
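
The grep/rm sequence above implements stale-config cleanup: each leftover kubeconfig is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed so kubeadm can regenerate it. A Go sketch of that loop (simplified; no sudo handling):

package main

import (
	"fmt"
	"os"
	"strings"
)

// For each kubeconfig kubeadm may have left behind, keep it only if it
// targets our control-plane endpoint; otherwise remove it, mirroring the
// grep / rm -f sequence in the log above.
func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // config already targets this cluster
		}
		if err := os.Remove(conf); err != nil && !os.IsNotExist(err) {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
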
	I0919 18:39:31.460734  761388 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 18:39:31.495835  761388 kubeadm.go:310] W0919 18:39:31.495183    1289 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0919 18:39:31.496393  761388 kubeadm.go:310] W0919 18:39:31.495823    1289 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0919 18:39:31.513844  761388 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-gcp\n", err: exit status 1
	I0919 18:39:31.563421  761388 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 18:39:40.033093  761388 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0919 18:39:40.033184  761388 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 18:39:40.033278  761388 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 18:39:40.033324  761388 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-gcp
	I0919 18:39:40.033356  761388 kubeadm.go:310] OS: Linux
	I0919 18:39:40.033398  761388 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 18:39:40.033437  761388 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0919 18:39:40.033482  761388 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 18:39:40.033521  761388 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 18:39:40.033566  761388 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 18:39:40.033607  761388 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 18:39:40.033655  761388 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 18:39:40.033699  761388 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 18:39:40.033736  761388 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0919 18:39:40.033793  761388 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 18:39:40.033891  761388 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 18:39:40.034008  761388 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 18:39:40.034100  761388 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 18:39:40.035787  761388 out.go:235]   - Generating certificates and keys ...
	I0919 18:39:40.035950  761388 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 18:39:40.036208  761388 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 18:39:40.036312  761388 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 18:39:40.036391  761388 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 18:39:40.036476  761388 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 18:39:40.036548  761388 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 18:39:40.036641  761388 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 18:39:40.036746  761388 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-685250 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 18:39:40.036794  761388 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 18:39:40.036940  761388 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-685250 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 18:39:40.037024  761388 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 18:39:40.037075  761388 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 18:39:40.037112  761388 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 18:39:40.037161  761388 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 18:39:40.037201  761388 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 18:39:40.037258  761388 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 18:39:40.037338  761388 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 18:39:40.037448  761388 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 18:39:40.037533  761388 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 18:39:40.037626  761388 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 18:39:40.037718  761388 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 18:39:40.039316  761388 out.go:235]   - Booting up control plane ...
	I0919 18:39:40.039415  761388 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 18:39:40.039524  761388 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 18:39:40.039619  761388 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 18:39:40.039728  761388 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 18:39:40.039841  761388 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 18:39:40.039909  761388 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 18:39:40.040093  761388 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 18:39:40.040237  761388 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 18:39:40.040290  761388 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.645723ms
	I0919 18:39:40.040356  761388 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0919 18:39:40.040404  761388 kubeadm.go:310] [api-check] The API server is healthy after 4.502008624s
	I0919 18:39:40.040492  761388 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 18:39:40.040605  761388 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 18:39:40.040687  761388 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 18:39:40.040875  761388 kubeadm.go:310] [mark-control-plane] Marking the node addons-685250 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 18:39:40.040960  761388 kubeadm.go:310] [bootstrap-token] Using token: ijm4ly.86nu9uivdcvgfqko
	I0919 18:39:40.042478  761388 out.go:235]   - Configuring RBAC rules ...
	I0919 18:39:40.042563  761388 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 18:39:40.042634  761388 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 18:39:40.042751  761388 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 18:39:40.042898  761388 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 18:39:40.043013  761388 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 18:39:40.043111  761388 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 18:39:40.043261  761388 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 18:39:40.043324  761388 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 18:39:40.043388  761388 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 18:39:40.043398  761388 kubeadm.go:310] 
	I0919 18:39:40.043485  761388 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 18:39:40.043499  761388 kubeadm.go:310] 
	I0919 18:39:40.043591  761388 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 18:39:40.043599  761388 kubeadm.go:310] 
	I0919 18:39:40.043634  761388 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 18:39:40.043719  761388 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 18:39:40.043765  761388 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 18:39:40.043770  761388 kubeadm.go:310] 
	I0919 18:39:40.043812  761388 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 18:39:40.043817  761388 kubeadm.go:310] 
	I0919 18:39:40.043857  761388 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 18:39:40.043862  761388 kubeadm.go:310] 
	I0919 18:39:40.043902  761388 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 18:39:40.043999  761388 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 18:39:40.044089  761388 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 18:39:40.044096  761388 kubeadm.go:310] 
	I0919 18:39:40.044175  761388 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 18:39:40.044258  761388 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 18:39:40.044266  761388 kubeadm.go:310] 
	I0919 18:39:40.044382  761388 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ijm4ly.86nu9uivdcvgfqko \
	I0919 18:39:40.044505  761388 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0d3b67c6a36b796da7b157a4d4acdf893c00e58f1cfebf42e9b32e5d1fd17179 \
	I0919 18:39:40.044525  761388 kubeadm.go:310] 	--control-plane 
	I0919 18:39:40.044531  761388 kubeadm.go:310] 
	I0919 18:39:40.044599  761388 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 18:39:40.044606  761388 kubeadm.go:310] 
	I0919 18:39:40.044684  761388 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ijm4ly.86nu9uivdcvgfqko \
	I0919 18:39:40.044851  761388 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0d3b67c6a36b796da7b157a4d4acdf893c00e58f1cfebf42e9b32e5d1fd17179 
	I0919 18:39:40.044867  761388 cni.go:84] Creating CNI manager for ""
	I0919 18:39:40.044876  761388 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 18:39:40.046449  761388 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0919 18:39:40.047787  761388 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 18:39:40.051623  761388 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0919 18:39:40.051638  761388 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 18:39:40.069179  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 18:39:40.264712  761388 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 18:39:40.264794  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:40.264800  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-685250 minikube.k8s.io/updated_at=2024_09_19T18_39_40_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef minikube.k8s.io/name=addons-685250 minikube.k8s.io/primary=true
	I0919 18:39:40.272124  761388 ops.go:34] apiserver oom_adj: -16
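
An oom_adj of -16 tells the kernel's OOM killer to strongly prefer other processes over the API server. The check reads it straight out of procfs, roughly like this (the PID is a placeholder for whatever pgrep kube-apiserver returns):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	pid := 1289 // hypothetical; substitute the real kube-apiserver PID
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
	if err != nil {
		panic(err)
	}
	// -16 means the kernel avoids OOM-killing this process.
	fmt.Println("oom_adj:", strings.TrimSpace(string(data)))
}
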
	I0919 18:39:40.450150  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:40.950813  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:41.450429  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:41.950463  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:42.450542  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:42.950992  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:43.451199  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:43.950242  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:44.012691  761388 kubeadm.go:1113] duration metric: took 3.747963897s to wait for elevateKubeSystemPrivileges
	I0919 18:39:44.012729  761388 kubeadm.go:394] duration metric: took 12.669506054s to StartCluster
	I0919 18:39:44.012758  761388 settings.go:142] acquiring lock: {Name:mkba96297ae0a710684a3a2a45be357ed7205f56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:44.012903  761388 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19664-753213/kubeconfig
	I0919 18:39:44.013318  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/kubeconfig: {Name:mk7bd3287a61595c1c20478c3038a77f636ffaa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:44.013536  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 18:39:44.013566  761388 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 18:39:44.013636  761388 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0919 18:39:44.013758  761388 addons.go:69] Setting yakd=true in profile "addons-685250"
	I0919 18:39:44.013778  761388 addons.go:69] Setting helm-tiller=true in profile "addons-685250"
	I0919 18:39:44.013797  761388 addons.go:69] Setting registry=true in profile "addons-685250"
	I0919 18:39:44.013801  761388 addons.go:69] Setting ingress=true in profile "addons-685250"
	I0919 18:39:44.013794  761388 addons.go:69] Setting metrics-server=true in profile "addons-685250"
	I0919 18:39:44.013782  761388 addons.go:234] Setting addon yakd=true in "addons-685250"
	I0919 18:39:44.013816  761388 addons.go:234] Setting addon ingress=true in "addons-685250"
	I0919 18:39:44.013818  761388 addons.go:69] Setting storage-provisioner=true in profile "addons-685250"
	I0919 18:39:44.013824  761388 addons.go:234] Setting addon metrics-server=true in "addons-685250"
	I0919 18:39:44.013824  761388 config.go:182] Loaded profile config "addons-685250": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 18:39:44.013835  761388 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-685250"
	I0919 18:39:44.013850  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.013852  761388 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-685250"
	I0919 18:39:44.013855  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.013828  761388 addons.go:234] Setting addon storage-provisioner=true in "addons-685250"
	I0919 18:39:44.013859  761388 addons.go:69] Setting ingress-dns=true in profile "addons-685250"
	I0919 18:39:44.013875  761388 addons.go:69] Setting inspektor-gadget=true in profile "addons-685250"
	I0919 18:39:44.013891  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.013904  761388 addons.go:69] Setting default-storageclass=true in profile "addons-685250"
	I0919 18:39:44.013905  761388 addons.go:69] Setting gcp-auth=true in profile "addons-685250"
	I0919 18:39:44.013920  761388 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-685250"
	I0919 18:39:44.013928  761388 mustload.go:65] Loading cluster: addons-685250
	I0919 18:39:44.013810  761388 addons.go:234] Setting addon helm-tiller=true in "addons-685250"
	I0919 18:39:44.013987  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.014106  761388 config.go:182] Loaded profile config "addons-685250": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 18:39:44.013760  761388 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-685250"
	I0919 18:39:44.014180  761388 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-685250"
	I0919 18:39:44.014213  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.014224  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.014234  761388 addons.go:69] Setting volcano=true in profile "addons-685250"
	I0919 18:39:44.014289  761388 addons.go:234] Setting addon volcano=true in "addons-685250"
	I0919 18:39:44.014321  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.014369  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.014420  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.014444  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.014529  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.014668  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.014766  761388 addons.go:69] Setting volumesnapshots=true in profile "addons-685250"
	I0919 18:39:44.014784  761388 addons.go:234] Setting addon volumesnapshots=true in "addons-685250"
	I0919 18:39:44.014224  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.014811  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.014813  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.013790  761388 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-685250"
	I0919 18:39:44.014885  761388 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-685250"
	I0919 18:39:44.014921  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.013892  761388 addons.go:234] Setting addon ingress-dns=true in "addons-685250"
	I0919 18:39:44.015381  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.015478  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.013782  761388 addons.go:69] Setting cloud-spanner=true in profile "addons-685250"
	I0919 18:39:44.015604  761388 addons.go:234] Setting addon cloud-spanner=true in "addons-685250"
	I0919 18:39:44.015632  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.013894  761388 addons.go:234] Setting addon inspektor-gadget=true in "addons-685250"
	I0919 18:39:44.015698  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.016016  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.016089  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.015481  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.016191  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.013861  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.017759  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.020298  761388 out.go:177] * Verifying Kubernetes components...
	I0919 18:39:44.015297  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.013811  761388 addons.go:234] Setting addon registry=true in "addons-685250"
	I0919 18:39:44.026436  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.028211  761388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:39:44.037105  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.048567  761388 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0919 18:39:44.048657  761388 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 18:39:44.050374  761388 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0919 18:39:44.050397  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0919 18:39:44.050461  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.052343  761388 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0919 18:39:44.060733  761388 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 18:39:44.062707  761388 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0919 18:39:44.062730  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0919 18:39:44.062789  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.081544  761388 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0919 18:39:44.081631  761388 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0919 18:39:44.083278  761388 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0919 18:39:44.083339  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0919 18:39:44.083408  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.086304  761388 out.go:177]   - Using image docker.io/registry:2.8.3
	I0919 18:39:44.086735  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0919 18:39:44.088743  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0919 18:39:44.088872  761388 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0919 18:39:44.091114  761388 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-685250"
	I0919 18:39:44.091164  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.091489  761388 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0919 18:39:44.091508  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0919 18:39:44.091564  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.091649  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.091952  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.092800  761388 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0919 18:39:44.092818  761388 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0919 18:39:44.092889  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.094032  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0919 18:39:44.101275  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0919 18:39:44.103871  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0919 18:39:44.106750  761388 addons.go:234] Setting addon default-storageclass=true in "addons-685250"
	I0919 18:39:44.106804  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.107282  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	W0919 18:39:44.109675  761388 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0919 18:39:44.110326  761388 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 18:39:44.110334  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0919 18:39:44.112386  761388 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:39:44.112408  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 18:39:44.112472  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.112565  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0919 18:39:44.113382  761388 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0919 18:39:44.114898  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0919 18:39:44.114906  761388 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 18:39:44.114925  761388 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 18:39:44.114984  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.116662  761388 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0919 18:39:44.116682  761388 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0919 18:39:44.116748  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.119259  761388 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0919 18:39:44.120516  761388 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0919 18:39:44.120540  761388 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0919 18:39:44.120610  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.123773  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.136078  761388 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0919 18:39:44.138681  761388 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0919 18:39:44.138709  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0919 18:39:44.138773  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.144207  761388 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0919 18:39:44.145527  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.145578  761388 out.go:177]   - Using image docker.io/busybox:stable
	I0919 18:39:44.146995  761388 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0919 18:39:44.147017  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0919 18:39:44.147076  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.152809  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.156308  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0919 18:39:44.157886  761388 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0919 18:39:44.157903  761388 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0919 18:39:44.157925  761388 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0919 18:39:44.157985  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.162886  761388 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 18:39:44.162909  761388 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 18:39:44.162966  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.163450  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.166881  761388 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0919 18:39:44.166906  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0919 18:39:44.166969  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.172034  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.180781  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
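
The sed pipeline above patches the coredns ConfigMap so pods can resolve host.minikube.internal: a hosts block is spliced in just before the forward plugin, and the log plugin is enabled after errors. A Go sketch of the same splice on a stripped-down Corefile (the ConfigMap round trip through kubectl is omitted):

package main

import (
	"fmt"
	"strings"
)

// patchCorefile reproduces the sed pipeline's effect: insert a hosts block
// before the forward plugin and enable `log` after `errors`.
func patchCorefile(corefile, gatewayIP string) string {
	hosts := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		gatewayIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		trimmed := strings.TrimSpace(line)
		if strings.HasPrefix(trimmed, "forward . /etc/resolv.conf") {
			out.WriteString(hosts) // hosts block goes before forward
		}
		out.WriteString(line)
		if trimmed == "errors" {
			out.WriteString("        log\n")
		}
	}
	return out.String()
}

func main() {
	fmt.Println(patchCorefile("        errors\n        forward . /etc/resolv.conf\n", "192.168.49.1"))
}
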
	I0919 18:39:44.183673  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.189557  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.190040  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.198542  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.202993  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.203703  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.205321  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.208823  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.209666  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	W0919 18:39:44.241755  761388 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0919 18:39:44.241799  761388 retry.go:31] will retry after 368.513545ms: ssh: handshake failed: EOF
	W0919 18:39:44.241901  761388 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0919 18:39:44.241912  761388 retry.go:31] will retry after 353.358743ms: ssh: handshake failed: EOF
	W0919 18:39:44.241992  761388 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0919 18:39:44.242019  761388 retry.go:31] will retry after 239.291473ms: ssh: handshake failed: EOF
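The three handshake failures above are transient: many provisioning goroutines dial the forwarded SSH port (33518) concurrently while the node container's sshd is still warming up, so sshutil backs off a few hundred milliseconds and redials. The same retry-on-EOF pattern from the shell, as a minimal sketch using the key path, port, and user from this run:

	# Redial until sshd inside the node container accepts the handshake;
	# a transient "handshake failed: EOF" clears once sshd is ready.
	for attempt in 1 2 3 4 5; do
	  ssh -i /home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa \
	      -p 33518 -o StrictHostKeyChecking=no docker@127.0.0.1 true && break
	  sleep 0.4  # on the order of the 239-369ms backoffs logged above
	done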
	I0919 18:39:44.351392  761388 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 18:39:44.437649  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0919 18:39:44.536099  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0919 18:39:44.541975  761388 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0919 18:39:44.542004  761388 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0919 18:39:44.544666  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:39:44.646013  761388 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0919 18:39:44.646047  761388 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0919 18:39:44.743483  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 18:39:44.743812  761388 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0919 18:39:44.743879  761388 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0919 18:39:44.839790  761388 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 18:39:44.839821  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0919 18:39:44.840867  761388 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0919 18:39:44.840892  761388 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0919 18:39:44.844891  761388 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0919 18:39:44.844913  761388 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0919 18:39:44.859724  761388 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0919 18:39:44.859754  761388 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0919 18:39:44.945601  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0919 18:39:44.948297  761388 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0919 18:39:44.948369  761388 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0919 18:39:44.953207  761388 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0919 18:39:44.953285  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0919 18:39:45.049434  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0919 18:39:45.049642  761388 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0919 18:39:45.049698  761388 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0919 18:39:45.055848  761388 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0919 18:39:45.055950  761388 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0919 18:39:45.058998  761388 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 18:39:45.059024  761388 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 18:39:45.141944  761388 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0919 18:39:45.141986  761388 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0919 18:39:45.156162  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0919 18:39:45.246810  761388 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0919 18:39:45.246840  761388 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0919 18:39:45.256490  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0919 18:39:45.437813  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0919 18:39:45.441833  761388 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 18:39:45.441871  761388 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 18:39:45.549176  761388 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0919 18:39:45.549265  761388 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0919 18:39:45.637502  761388 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0919 18:39:45.637591  761388 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0919 18:39:45.642826  761388 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.2913856s)
	I0919 18:39:45.644038  761388 node_ready.go:35] waiting up to 6m0s for node "addons-685250" to be "Ready" ...
	I0919 18:39:45.644391  761388 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.463571637s)
	I0919 18:39:45.644468  761388 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
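The pipeline that just completed (1.46s) rewrote the coredns ConfigMap in place: sed inserted a hosts plugin block ahead of the "forward . /etc/resolv.conf" directive and a "log" directive ahead of "errors", then fed the edited Corefile back through "kubectl replace -f -". Reconstructed from the sed expression, the injected fragment is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

fallthrough hands every other name to the next plugin, so only host.minikube.internal is pinned to the Docker network gateway and ordinary cluster DNS is untouched.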
	I0919 18:39:45.647199  761388 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0919 18:39:45.647259  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0919 18:39:45.737336  761388 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0919 18:39:45.737429  761388 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0919 18:39:45.754802  761388 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0919 18:39:45.754834  761388 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0919 18:39:45.836195  761388 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0919 18:39:45.836236  761388 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0919 18:39:45.851797  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 18:39:45.936024  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0919 18:39:45.956936  761388 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0919 18:39:45.956972  761388 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0919 18:39:46.159873  761388 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0919 18:39:46.159908  761388 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0919 18:39:46.337448  761388 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0919 18:39:46.337478  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0919 18:39:46.356760  761388 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-685250" context rescaled to 1 replicas
	I0919 18:39:46.436892  761388 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0919 18:39:46.436928  761388 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0919 18:39:46.537037  761388 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0919 18:39:46.537072  761388 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0919 18:39:46.746236  761388 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:39:46.746266  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0919 18:39:46.854918  761388 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0919 18:39:46.855018  761388 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0919 18:39:46.946936  761388 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0919 18:39:46.946983  761388 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0919 18:39:47.236798  761388 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0919 18:39:47.236841  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0919 18:39:47.246825  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:39:47.257114  761388 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0919 18:39:47.257149  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0919 18:39:47.453170  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0919 18:39:47.542740  761388 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0919 18:39:47.542772  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0919 18:39:47.659810  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
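node_ready.go re-reads the node object until its Ready condition reports True, within the 6m0s budget declared at 18:39:45. A sketch of the same check with plain kubectl against this context:

	# One-shot: print the Ready condition's status (True/False).
	kubectl --context addons-685250 get node addons-685250 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# Or block, mirroring the 6-minute wait above:
	kubectl --context addons-685250 wait node/addons-685250 \
	  --for=condition=Ready --timeout=6m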
	I0919 18:39:47.759785  761388 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0919 18:39:47.759819  761388 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0919 18:39:47.957548  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0919 18:39:50.147172  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:39:50.150873  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.713170158s)
	I0919 18:39:50.150919  761388 addons.go:475] Verifying addon ingress=true in "addons-685250"
	I0919 18:39:50.150938  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.614729552s)
	I0919 18:39:50.151045  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.606300895s)
	I0919 18:39:50.151091  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.407584065s)
	I0919 18:39:50.151204  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.205541455s)
	I0919 18:39:50.151283  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.101743958s)
	I0919 18:39:50.151334  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.995098572s)
	I0919 18:39:50.151399  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.89486624s)
	I0919 18:39:50.151505  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.713655603s)
	I0919 18:39:50.151528  761388 addons.go:475] Verifying addon registry=true in "addons-685250"
	I0919 18:39:50.151594  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.29976078s)
	I0919 18:39:50.151618  761388 addons.go:475] Verifying addon metrics-server=true in "addons-685250"
	I0919 18:39:50.151657  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.215596812s)
	I0919 18:39:50.152907  761388 out.go:177] * Verifying ingress addon...
	I0919 18:39:50.153936  761388 out.go:177] * Verifying registry addon...
	I0919 18:39:50.153951  761388 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-685250 service yakd-dashboard -n yakd-dashboard
	
	I0919 18:39:50.155824  761388 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0919 18:39:50.157505  761388 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W0919 18:39:50.163513  761388 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
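That warning is an optimistic-concurrency conflict rather than a broken addon: default-storageclass and storage-provisioner-rancher both edit StorageClass annotations, one write lands on a stale resourceVersion, and the API server refuses it. Re-reading and re-applying resolves it; done by hand it is one annotation flip per class (a sketch, assuming minikube's usual class names, standard and local-path):

	# Demote local-path, promote standard; rerun if a conflict recurs.
	kubectl --context addons-685250 patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
	kubectl --context addons-685250 patch storageclass standard -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'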
	I0919 18:39:50.238665  761388 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0919 18:39:50.238695  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:50.238959  761388 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0919 18:39:50.238987  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:50.660404  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:50.662046  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
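Each of these kapi.go:96 lines is one iteration of the same poll loop: list the pods matching the label selector, print the aggregate phase, sleep, repeat until every pod is Running. Ingress, registry, and (below) csi-hostpath-driver and gcp-auth all reuse it. The registry wait, expressed as a single kubectl call:

	kubectl --context addons-685250 -n kube-system wait pod \
	  -l kubernetes.io/minikube-addons=registry \
	  --for=condition=Ready --timeout=6m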
	I0919 18:39:50.877367  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.630488674s)
	W0919 18:39:50.877434  761388 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0919 18:39:50.877461  761388 retry.go:31] will retry after 374.811419ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
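Both attempts fail the same way because a single kubectl apply carries the VolumeSnapshot CRDs and a VolumeSnapshotClass object together: the CRDs are created, but the object reaches the API server before the new REST mapping is established, hence "ensure CRDs are installed first". minikube simply retries (the 18:39:51 attempt below re-runs with --force and succeeds once the CRDs are established); the race can also be sidestepped by splitting the apply into two phases, as a sketch using the same manifests:

	# Phase 1: CRDs only, then wait until they are established.
	kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	# Phase 2: everything that instantiates those kinds.
	kubectl apply \
	  -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
	  -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
	  -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml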
	I0919 18:39:50.877563  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.424342572s)
	I0919 18:39:51.159983  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:51.160342  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:51.251656  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.294045721s)
	I0919 18:39:51.251706  761388 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-685250"
	I0919 18:39:51.252726  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:39:51.253330  761388 out.go:177] * Verifying csi-hostpath-driver addon...
	I0919 18:39:51.255845  761388 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0919 18:39:51.260109  761388 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0919 18:39:51.260134  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:51.299405  761388 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0919 18:39:51.299470  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:51.319259  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:51.435849  761388 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0919 18:39:51.455177  761388 addons.go:234] Setting addon gcp-auth=true in "addons-685250"
	I0919 18:39:51.455235  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:51.455622  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:51.473709  761388 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0919 18:39:51.473768  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:51.492852  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
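From 18:39:51 the run layers gcp-auth on top: the host's application-default credentials were copied to /var/lib/minikube/google_application_credentials.json, the project name to google_cloud_project, and the addon flagged on, so the webhook deployed by the gcp-auth-ns/-service/-webhook manifests below can inject those credentials into workload pods. A quick check from outside the test harness (a sketch; the webhook pod is found and polled starting at 18:39:54 below):

	minikube -p addons-685250 addons enable gcp-auth
	kubectl --context addons-685250 -n gcp-auth get pods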
	I0919 18:39:51.660242  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:51.660451  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:51.763672  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:52.148125  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:39:52.160486  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:52.160637  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:52.260177  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:52.659866  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:52.660361  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:52.759357  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:53.159414  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:53.160699  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:53.260412  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:53.660465  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:53.660995  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:53.760079  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:54.036339  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.783560208s)
	I0919 18:39:54.036401  761388 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.56265651s)
	I0919 18:39:54.037930  761388 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 18:39:54.039158  761388 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0919 18:39:54.040281  761388 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0919 18:39:54.040295  761388 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0919 18:39:54.060953  761388 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0919 18:39:54.060982  761388 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0919 18:39:54.078061  761388 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0919 18:39:54.078081  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0919 18:39:54.096196  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0919 18:39:54.159825  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:54.161174  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:54.259118  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:54.649396  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:39:54.664552  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:54.666437  761388 addons.go:475] Verifying addon gcp-auth=true in "addons-685250"
	I0919 18:39:54.666458  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:54.669012  761388 out.go:177] * Verifying gcp-auth addon...
	I0919 18:39:54.671405  761388 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0919 18:39:54.762155  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:54.762165  761388 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0919 18:39:54.762193  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:55.159689  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:55.161131  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:55.174401  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:55.259291  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:55.659983  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:55.660209  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:55.674181  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:55.758821  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:56.159552  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:56.161022  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:56.174326  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:56.259237  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:56.660149  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:56.660452  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:56.675011  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:56.759761  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:57.147230  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:39:57.160802  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:57.160843  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:57.174625  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:57.259483  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:57.659641  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:57.660974  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:57.674433  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:57.759804  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:58.159364  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:58.160396  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:58.175074  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:58.258973  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:58.659663  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:58.659995  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:58.674333  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:58.759220  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:59.159931  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:59.160111  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:59.174241  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:59.259030  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:59.647936  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:39:59.660361  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:59.660641  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:59.674569  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:59.759432  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:00.160240  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:00.160488  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:00.174961  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:00.259892  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:00.660179  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:00.660554  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:00.675141  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:00.758994  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:01.160048  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:01.160048  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:01.174593  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:01.259801  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:01.659777  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:01.660892  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:01.674204  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:01.759169  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:02.147887  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:02.160172  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:02.160247  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:02.174624  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:02.259598  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:02.659674  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:02.660694  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:02.674100  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:02.759727  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:03.159593  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:03.160617  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:03.174020  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:03.259297  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:03.660462  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:03.660957  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:03.674094  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:03.759774  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:04.159328  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:04.160575  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:04.174927  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:04.259749  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:04.647664  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:04.659478  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:04.661089  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:04.674181  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:04.759138  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:05.160148  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:05.160420  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:05.174732  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:05.259905  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:05.659969  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:05.660156  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:05.674731  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:05.759280  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:06.160047  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:06.160189  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:06.174412  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:06.259142  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:06.660052  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:06.660419  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:06.674781  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:06.759973  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:07.147840  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:07.159737  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:07.160196  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:07.174616  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:07.259365  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:07.659184  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:07.660781  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:07.674067  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:07.758888  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:08.160134  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:08.160271  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:08.174692  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:08.259835  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:08.659150  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:08.660428  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:08.674754  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:08.759483  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:09.159321  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:09.160653  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:09.175114  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:09.260634  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:09.647196  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:09.659462  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:09.660545  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:09.674993  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:09.759810  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:10.159952  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:10.161096  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:10.174611  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:10.259487  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:10.659118  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:10.660327  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:10.674867  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:10.759802  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:11.159342  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:11.160885  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:11.173987  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:11.259734  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:11.647819  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:11.659862  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:11.660211  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:11.674274  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:11.759168  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:12.160283  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:12.160439  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:12.175052  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:12.260097  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:12.659816  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:12.660819  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:12.674404  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:12.759164  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:13.160264  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:13.160357  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:13.174537  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:13.259736  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:13.660466  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:13.660513  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:13.674991  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:13.759495  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:14.146772  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:14.159525  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:14.159867  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:14.174094  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:14.260124  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:14.660152  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:14.660362  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:14.674852  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:14.759444  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:15.159996  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:15.160894  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:15.174310  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:15.259417  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:15.659374  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:15.660883  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:15.674695  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:15.759222  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:16.147487  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:16.159970  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:16.160975  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:16.174207  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:16.258997  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:16.660164  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:16.660247  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:16.674461  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:16.759434  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:17.160167  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:17.160211  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:17.174364  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:17.259173  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:17.658940  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:17.660444  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:17.674638  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:17.759422  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:18.159603  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:18.160463  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:18.174991  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:18.258926  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:18.647877  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:18.660091  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:18.660270  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:18.674507  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:18.759470  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:19.160102  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:19.160359  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:19.174708  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:19.259350  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:19.659690  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:19.660560  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:19.673993  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:19.759643  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:20.159760  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:20.160739  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:20.174018  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:20.259759  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:20.659618  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:20.660617  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:20.673972  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:20.759708  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:21.147628  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:21.159869  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:21.161165  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:21.174520  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:21.259323  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:21.659211  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:21.660585  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:21.673760  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:21.759818  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:22.159736  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:22.160153  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:22.174301  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:22.259002  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:22.659694  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:22.661106  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:22.674760  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:22.759413  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:23.159284  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:23.160467  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:23.174960  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:23.259223  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:23.647843  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:23.659948  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:23.659983  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:23.674196  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:23.758885  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:24.159695  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:24.160775  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:24.174128  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:24.260104  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:24.660632  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:24.661828  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:24.674068  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:24.759900  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:25.159730  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:25.160014  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:25.174822  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:25.259570  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:25.659440  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:25.660392  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:25.674818  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:25.759718  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:26.147606  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:26.159628  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:26.161042  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:26.174701  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:26.259645  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:26.661426  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:26.662087  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:26.674503  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:26.759217  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:27.159812  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:27.160262  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:27.174635  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:27.259405  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:27.659575  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:27.660727  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:27.674227  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:27.759021  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:28.147837  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:28.160082  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:28.160114  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:28.174316  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:28.259173  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:28.646812  761388 node_ready.go:49] node "addons-685250" has status "Ready":"True"
	I0919 18:40:28.646840  761388 node_ready.go:38] duration metric: took 43.002724586s for node "addons-685250" to be "Ready" ...
	I0919 18:40:28.646862  761388 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 18:40:28.657370  761388 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xxkrh" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:28.665479  761388 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0919 18:40:28.665601  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:28.666301  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:28.673925  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:28.761809  761388 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0919 18:40:28.761844  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:29.160890  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:29.161414  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:29.174200  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:29.262793  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:29.666949  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:29.668214  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:29.673941  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:29.760517  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:30.160901  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:30.165455  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:30.238277  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:30.261435  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:30.665010  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:30.665243  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:30.740441  761388 pod_ready.go:93] pod "coredns-7c65d6cfc9-xxkrh" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:30.740475  761388 pod_ready.go:82] duration metric: took 2.083070651s for pod "coredns-7c65d6cfc9-xxkrh" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.740502  761388 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-685250" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.740774  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:30.749009  761388 pod_ready.go:93] pod "etcd-addons-685250" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:30.749034  761388 pod_ready.go:82] duration metric: took 8.524276ms for pod "etcd-addons-685250" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.749051  761388 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-685250" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.755475  761388 pod_ready.go:93] pod "kube-apiserver-addons-685250" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:30.755499  761388 pod_ready.go:82] duration metric: took 6.439358ms for pod "kube-apiserver-addons-685250" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.755513  761388 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-685250" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.837071  761388 pod_ready.go:93] pod "kube-controller-manager-addons-685250" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:30.837158  761388 pod_ready.go:82] duration metric: took 81.634686ms for pod "kube-controller-manager-addons-685250" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.837180  761388 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tt5h8" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.842181  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:30.843110  761388 pod_ready.go:93] pod "kube-proxy-tt5h8" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:30.843130  761388 pod_ready.go:82] duration metric: took 5.940025ms for pod "kube-proxy-tt5h8" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.843141  761388 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-685250" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:31.064216  761388 pod_ready.go:93] pod "kube-scheduler-addons-685250" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:31.064250  761388 pod_ready.go:82] duration metric: took 221.10192ms for pod "kube-scheduler-addons-685250" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:31.064264  761388 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:31.160309  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:31.161868  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:31.175154  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:31.261445  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:31.661945  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:31.662739  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:31.674262  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:31.764171  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:32.160964  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:32.161120  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:32.175453  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:32.261255  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:32.660913  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:32.661774  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:32.675133  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:32.760592  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:33.070854  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:33.161051  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:33.161301  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:33.175286  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:33.260865  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:33.660702  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:33.661852  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:33.675273  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:33.760668  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:34.160546  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:34.161086  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:34.174285  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:34.260753  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:34.661118  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:34.661516  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:34.675418  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:34.760922  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:35.071857  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:35.160454  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:35.160768  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:35.175281  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:35.260345  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:35.660487  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:35.661415  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:35.674901  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:35.760686  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:36.160095  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:36.161029  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:36.174515  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:36.260186  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:36.660284  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:36.661541  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:36.674751  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:36.760998  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:37.160677  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:37.160812  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:37.174659  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:37.260012  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:37.569850  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:37.660726  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:37.661114  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:37.674871  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:37.762472  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:38.160011  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:38.161167  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:38.236912  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:38.261156  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:38.660760  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:38.661073  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:38.675428  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:38.760681  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:39.160674  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:39.161278  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:39.174402  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:39.259952  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:39.570471  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:39.660746  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:39.661314  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:39.675826  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:39.760609  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:40.160453  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:40.161002  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:40.175034  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:40.261000  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:40.660533  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:40.661321  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:40.674507  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:40.760519  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:41.160473  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:41.161342  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:41.174400  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:41.259949  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:41.570843  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:41.660891  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:41.661331  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:41.675442  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:41.761658  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:42.159681  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:42.161135  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:42.175056  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:42.260520  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:42.660591  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:42.660622  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:42.675267  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:42.761379  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:43.160638  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:43.161031  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:43.241441  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:43.261128  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:43.641195  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:43.660811  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:43.660936  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:43.674877  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:43.761319  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:44.160296  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:44.161343  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:44.174926  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:44.260471  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:44.660490  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:44.661342  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:44.674851  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:44.760497  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:45.160507  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:45.160595  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:45.174852  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:45.260568  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:45.660293  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:45.660999  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:45.674670  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:45.761087  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:46.070190  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:46.160550  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:46.160867  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:46.174270  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:46.260149  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:46.660826  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:46.661696  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:46.676864  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:46.760955  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:47.160938  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:47.161615  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:47.175003  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:47.260783  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:47.660110  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:47.663272  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:47.701700  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:47.760283  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:48.159939  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:48.160947  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:48.174393  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:48.261025  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:48.570860  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:48.660740  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:48.661222  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:48.674639  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:48.761763  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:49.160005  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:49.160755  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:49.175182  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:49.260174  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:49.661013  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:49.661304  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:49.675895  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:49.777512  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:50.160946  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:50.160950  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:50.174204  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:50.259800  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:50.660357  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:50.661468  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:50.674771  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:50.760091  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:51.069537  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:51.160657  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:51.161375  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:51.174522  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:51.260449  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:51.660943  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:51.661436  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:51.679949  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:51.760555  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:52.160884  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:52.161969  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:52.175511  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:52.260422  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:52.660009  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:52.661427  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:52.674747  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:52.760455  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:53.069882  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:53.160723  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:53.160847  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:53.175048  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:53.260265  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:53.660742  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:53.660975  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:53.675736  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:53.760427  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:54.160454  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:54.160554  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:54.175527  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:54.261623  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:54.661044  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:54.661280  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:54.674256  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:54.762345  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:55.161624  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:55.161856  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:55.177557  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:55.260964  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:55.571599  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:55.660145  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:55.661293  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:55.674636  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:55.760666  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:56.160746  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:56.161295  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:56.174304  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:56.259893  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:56.660305  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:56.661330  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:56.674639  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:56.759937  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:57.161201  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:57.161367  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:57.174319  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:57.259921  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:57.660452  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:57.661521  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:57.675492  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:57.760449  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:58.071078  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:58.166319  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:58.167684  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:58.174484  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:58.261744  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:58.739476  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:58.740647  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:58.741278  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:58.843925  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:59.250851  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:59.348633  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:59.349162  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:59.352318  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:59.660355  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:59.662169  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:59.737125  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:59.761343  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:00.071258  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:00.161047  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:00.161410  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:00.175212  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:00.261071  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:00.661009  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:00.662071  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:00.674963  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:00.761260  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:01.160995  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:01.161522  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:01.174377  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:01.261177  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:01.660419  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:01.661825  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:01.675387  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:01.760448  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:02.071634  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:02.160982  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:02.161497  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:02.175139  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:02.262015  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:02.660625  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:02.661137  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:02.676415  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:02.760266  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:03.160315  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:03.161430  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:03.174874  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:03.260917  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:03.660127  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:03.661283  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:03.760962  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:03.761328  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:04.160941  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:04.161529  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:04.175159  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:04.260532  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:04.570304  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:04.660567  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:04.661503  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:04.675149  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:04.761527  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:05.160742  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:05.161438  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:05.175035  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:05.260884  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:05.660133  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:05.661095  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:05.674647  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:05.760505  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:06.160998  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:06.161237  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:06.175185  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:06.261772  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:06.570424  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:06.660209  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:06.661433  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:06.675129  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:06.761340  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:07.160439  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:07.161643  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:07.175553  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:07.260491  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:07.661227  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:07.661700  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:07.674758  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:07.769893  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:08.160882  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:08.161229  761388 kapi.go:107] duration metric: took 1m18.003722545s to wait for kubernetes.io/minikube-addons=registry ...
	I0919 18:41:08.174364  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:08.260993  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:08.570813  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:08.661066  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:08.675397  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:08.761869  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:09.163441  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:09.260343  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:09.261680  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:09.661162  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:09.738749  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:09.761895  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:10.161848  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:10.174642  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:10.261127  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:10.638793  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:10.660408  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:10.737983  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:10.761997  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:11.160636  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:11.238753  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:11.260239  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:11.661077  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:11.675809  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:11.760946  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:12.160226  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:12.174555  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:12.260120  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:12.660888  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:12.675281  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:12.759818  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:13.070755  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:13.159900  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:13.175280  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:13.260711  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:13.674228  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:13.675067  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:13.761264  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:14.160557  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:14.174803  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:14.260591  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:14.660641  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:14.675045  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:14.761376  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:15.070790  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:15.161017  761388 kapi.go:107] duration metric: took 1m25.005187502s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0919 18:41:15.174846  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:15.261085  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:15.675476  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:15.837474  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:16.268231  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:16.268764  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:16.676196  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:16.760827  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:17.176212  761388 kapi.go:107] duration metric: took 1m22.504803809s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0919 18:41:17.177857  761388 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-685250 cluster.
	I0919 18:41:17.179198  761388 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0919 18:41:17.180644  761388 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0919 18:41:17.262198  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:17.570361  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:17.760518  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:18.261747  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:18.761118  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:19.260370  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:19.570826  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:19.761115  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:20.260708  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:20.761013  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:21.260276  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:21.571353  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:21.760456  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:22.260815  761388 kapi.go:107] duration metric: took 1m31.004968765s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0919 18:41:22.262816  761388 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, ingress-dns, helm-tiller, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0919 18:41:22.264198  761388 addons.go:510] duration metric: took 1m38.250564753s for enable addons: enabled=[nvidia-device-plugin storage-provisioner ingress-dns helm-tiller cloud-spanner metrics-server yakd storage-provisioner-rancher inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
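
All four addon wait loops have now resolved. Each kapi.go:96/kapi.go:107 pair above is one poll loop per label selector: list the matching pods, log the current phase while anything is still Pending, and return once everything is Running, reporting the elapsed time as a duration metric. Below is a minimal client-go sketch of that pattern, not minikube's actual kapi.go; the namespace, selector, and timeout are taken from this run, everything else is illustrative.

	// waitpods.go — sketch of a label-selector wait loop (assumed names, not minikube code).
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabel polls until every pod matching selector in ns is Running.
	func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		start := time.Now()
		err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // transient errors and empty lists are simply retried
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
						return false, nil
					}
				}
				return true, nil
			})
		if err == nil {
			fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
		}
		return err
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		// Selector and timeout copied from the log above.
		if err := waitForLabel(context.Background(), cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
			panic(err)
		}
	}
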
	I0919 18:41:24.069345  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:26.070338  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:28.571150  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:31.069639  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:33.069801  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:35.069951  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:37.070152  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:39.570142  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:41.570373  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:44.069797  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:46.070575  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:46.570352  761388 pod_ready.go:93] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"True"
	I0919 18:41:46.570378  761388 pod_ready.go:82] duration metric: took 1m15.506104425s for pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:46.570389  761388 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-lnffq" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:46.574639  761388 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-lnffq" in "kube-system" namespace has status "Ready":"True"
	I0919 18:41:46.574659  761388 pod_ready.go:82] duration metric: took 4.26409ms for pod "nvidia-device-plugin-daemonset-lnffq" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:46.574677  761388 pod_ready.go:39] duration metric: took 1m17.927800889s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
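
The interleaved pod_ready.go lines use a stricter test than phase polling: a pod counts as "Ready" only when its PodReady condition is True, which also requires its readiness probes to pass. That is why metrics-server-84c5f94fbc-gpv2k keeps reporting "Ready":"False" above long after the addon loops see it Running. A sketch of that condition check (a hypothetical helper, not minikube's pod_ready.go):

	// podready.go — sketch of the PodReady condition test (assumed package name).
	package podready

	import corev1 "k8s.io/api/core/v1"

	// isPodReady returns true once the pod's Ready condition is True,
	// i.e. all containers have started and readiness probes are passing.
	func isPodReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
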
	I0919 18:41:46.574695  761388 api_server.go:52] waiting for apiserver process to appear ...
	I0919 18:41:46.574727  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 18:41:46.574775  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 18:41:46.610505  761388 cri.go:89] found id: "d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf"
	I0919 18:41:46.610525  761388 cri.go:89] found id: ""
	I0919 18:41:46.610532  761388 logs.go:276] 1 containers: [d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf]
	I0919 18:41:46.610585  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:46.614097  761388 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 18:41:46.614166  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 18:41:46.647964  761388 cri.go:89] found id: "daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf"
	I0919 18:41:46.647984  761388 cri.go:89] found id: ""
	I0919 18:41:46.647992  761388 logs.go:276] 1 containers: [daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf]
	I0919 18:41:46.648034  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:46.651737  761388 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 18:41:46.651827  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 18:41:46.685728  761388 cri.go:89] found id: "61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a"
	I0919 18:41:46.685751  761388 cri.go:89] found id: ""
	I0919 18:41:46.685761  761388 logs.go:276] 1 containers: [61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a]
	I0919 18:41:46.685842  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:46.689509  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 18:41:46.689602  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 18:41:46.723120  761388 cri.go:89] found id: "a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae"
	I0919 18:41:46.723148  761388 cri.go:89] found id: ""
	I0919 18:41:46.723159  761388 logs.go:276] 1 containers: [a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae]
	I0919 18:41:46.723206  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:46.726505  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 18:41:46.726561  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 18:41:46.764041  761388 cri.go:89] found id: "1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d"
	I0919 18:41:46.764067  761388 cri.go:89] found id: ""
	I0919 18:41:46.764076  761388 logs.go:276] 1 containers: [1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d]
	I0919 18:41:46.764139  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:46.767386  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 18:41:46.767456  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 18:41:46.801334  761388 cri.go:89] found id: "4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148"
	I0919 18:41:46.801362  761388 cri.go:89] found id: ""
	I0919 18:41:46.801373  761388 logs.go:276] 1 containers: [4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148]
	I0919 18:41:46.801437  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:46.804747  761388 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 18:41:46.804810  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 18:41:46.838269  761388 cri.go:89] found id: "28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea"
	I0919 18:41:46.838289  761388 cri.go:89] found id: ""
	I0919 18:41:46.838297  761388 logs.go:276] 1 containers: [28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea]
	I0919 18:41:46.838353  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:46.841583  761388 logs.go:123] Gathering logs for describe nodes ...
	I0919 18:41:46.841608  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 18:41:46.939796  761388 logs.go:123] Gathering logs for kube-proxy [1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d] ...
	I0919 18:41:46.939825  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d"
	I0919 18:41:46.973962  761388 logs.go:123] Gathering logs for kube-controller-manager [4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148] ...
	I0919 18:41:46.973996  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148"
	I0919 18:41:47.040527  761388 logs.go:123] Gathering logs for kindnet [28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea] ...
	I0919 18:41:47.040563  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea"
	I0919 18:41:47.079512  761388 logs.go:123] Gathering logs for CRI-O ...
	I0919 18:41:47.079548  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 18:41:47.156835  761388 logs.go:123] Gathering logs for kubelet ...
	I0919 18:41:47.156873  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 18:41:47.244389  761388 logs.go:123] Gathering logs for kube-apiserver [d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf] ...
	I0919 18:41:47.244425  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf"
	I0919 18:41:47.291698  761388 logs.go:123] Gathering logs for etcd [daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf] ...
	I0919 18:41:47.291734  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf"
	I0919 18:41:47.339857  761388 logs.go:123] Gathering logs for coredns [61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a] ...
	I0919 18:41:47.339892  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a"
	I0919 18:41:47.378377  761388 logs.go:123] Gathering logs for kube-scheduler [a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae] ...
	I0919 18:41:47.378414  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae"
	I0919 18:41:47.419595  761388 logs.go:123] Gathering logs for container status ...
	I0919 18:41:47.419631  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 18:41:47.461066  761388 logs.go:123] Gathering logs for dmesg ...
	I0919 18:41:47.461101  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
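
Each "Gathering logs for ..." step above is a shell pipeline run through ssh_runner: journalctl for the kubelet and crio units, crictl logs capped at the last 400 lines for individual containers, and dmesg filtered to warnings and worse. A local sketch of the same pass (the container ID is a placeholder for one of the IDs found above; minikube executes these over SSH rather than locally):

	// gatherlogs.go — sketch of the log-gathering pass (placeholder container ID).
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		sources := map[string]string{
			"kubelet":        "sudo journalctl -u kubelet -n 400",
			"CRI-O":          "sudo journalctl -u crio -n 400",
			"dmesg":          "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"kube-apiserver": "sudo /usr/bin/crictl logs --tail 400 <container-id>", // placeholder ID
		}
		for name, cmdline := range sources {
			out, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
			if err != nil {
				fmt.Printf("gathering %s failed: %v\n", name, err)
				continue
			}
			fmt.Printf("==> %s <==\n%s\n", name, out)
		}
	}
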
	I0919 18:41:49.991902  761388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 18:41:50.006246  761388 api_server.go:72] duration metric: took 2m5.992641544s to wait for apiserver process to appear ...
	I0919 18:41:50.006277  761388 api_server.go:88] waiting for apiserver healthz status ...
	I0919 18:41:50.006316  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 18:41:50.006369  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 18:41:50.040275  761388 cri.go:89] found id: "d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf"
	I0919 18:41:50.040319  761388 cri.go:89] found id: ""
	I0919 18:41:50.040329  761388 logs.go:276] 1 containers: [d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf]
	I0919 18:41:50.040373  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:50.043705  761388 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 18:41:50.043766  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 18:41:50.078798  761388 cri.go:89] found id: "daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf"
	I0919 18:41:50.078819  761388 cri.go:89] found id: ""
	I0919 18:41:50.078826  761388 logs.go:276] 1 containers: [daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf]
	I0919 18:41:50.078884  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:50.082274  761388 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 18:41:50.082341  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 18:41:50.116003  761388 cri.go:89] found id: "61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a"
	I0919 18:41:50.116024  761388 cri.go:89] found id: ""
	I0919 18:41:50.116032  761388 logs.go:276] 1 containers: [61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a]
	I0919 18:41:50.116082  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:50.119438  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 18:41:50.119496  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 18:41:50.153370  761388 cri.go:89] found id: "a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae"
	I0919 18:41:50.153390  761388 cri.go:89] found id: ""
	I0919 18:41:50.153398  761388 logs.go:276] 1 containers: [a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae]
	I0919 18:41:50.153451  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:50.156934  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 18:41:50.156999  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 18:41:50.191346  761388 cri.go:89] found id: "1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d"
	I0919 18:41:50.191372  761388 cri.go:89] found id: ""
	I0919 18:41:50.191381  761388 logs.go:276] 1 containers: [1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d]
	I0919 18:41:50.191442  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:50.195442  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 18:41:50.195523  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 18:41:50.230094  761388 cri.go:89] found id: "4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148"
	I0919 18:41:50.230116  761388 cri.go:89] found id: ""
	I0919 18:41:50.230126  761388 logs.go:276] 1 containers: [4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148]
	I0919 18:41:50.230173  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:50.233591  761388 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 18:41:50.233648  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 18:41:50.267946  761388 cri.go:89] found id: "28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea"
	I0919 18:41:50.267968  761388 cri.go:89] found id: ""
	I0919 18:41:50.267976  761388 logs.go:276] 1 containers: [28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea]
	I0919 18:41:50.268020  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:50.271492  761388 logs.go:123] Gathering logs for etcd [daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf] ...
	I0919 18:41:50.271521  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf"
	I0919 18:41:50.315171  761388 logs.go:123] Gathering logs for coredns [61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a] ...
	I0919 18:41:50.315204  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a"
	I0919 18:41:50.350242  761388 logs.go:123] Gathering logs for kube-controller-manager [4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148] ...
	I0919 18:41:50.350276  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148"
	I0919 18:41:50.406986  761388 logs.go:123] Gathering logs for kindnet [28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea] ...
	I0919 18:41:50.407024  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea"
	I0919 18:41:50.443914  761388 logs.go:123] Gathering logs for CRI-O ...
	I0919 18:41:50.443950  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 18:41:50.522117  761388 logs.go:123] Gathering logs for kubelet ...
	I0919 18:41:50.522161  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 18:41:50.603999  761388 logs.go:123] Gathering logs for dmesg ...
	I0919 18:41:50.604036  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 18:41:50.633867  761388 logs.go:123] Gathering logs for describe nodes ...
	I0919 18:41:50.633909  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 18:41:50.735662  761388 logs.go:123] Gathering logs for kube-apiserver [d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf] ...
	I0919 18:41:50.735694  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf"
	I0919 18:41:50.778766  761388 logs.go:123] Gathering logs for kube-scheduler [a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae] ...
	I0919 18:41:50.778800  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae"
	I0919 18:41:50.822323  761388 logs.go:123] Gathering logs for kube-proxy [1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d] ...
	I0919 18:41:50.822362  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d"
	I0919 18:41:50.858212  761388 logs.go:123] Gathering logs for container status ...
	I0919 18:41:50.858244  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 18:41:53.402426  761388 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 18:41:53.406334  761388 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 18:41:53.407293  761388 api_server.go:141] control plane version: v1.31.1
	I0919 18:41:53.407337  761388 api_server.go:131] duration metric: took 3.401052443s to wait for apiserver health ...
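
The healthz wait above is a plain HTTPS GET against the apiserver, repeated until it answers 200 with body "ok". A sketch using the address from this run; TLS verification is skipped here only because the test cluster's serving certificate is self-signed, and a real client should trust the cluster CA instead:

	// healthz.go — sketch of the apiserver healthz probe (address taken from the log).
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		const url = "https://192.168.49.2:8443/healthz"
		resp, err := client.Get(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
	}
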
	I0919 18:41:53.407348  761388 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 18:41:53.407372  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 18:41:53.407424  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 18:41:53.442342  761388 cri.go:89] found id: "d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf"
	I0919 18:41:53.442368  761388 cri.go:89] found id: ""
	I0919 18:41:53.442378  761388 logs.go:276] 1 containers: [d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf]
	I0919 18:41:53.442443  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:53.445843  761388 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 18:41:53.445911  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 18:41:53.479392  761388 cri.go:89] found id: "daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf"
	I0919 18:41:53.479417  761388 cri.go:89] found id: ""
	I0919 18:41:53.479427  761388 logs.go:276] 1 containers: [daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf]
	I0919 18:41:53.479483  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:53.482761  761388 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 18:41:53.482821  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 18:41:53.517132  761388 cri.go:89] found id: "61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a"
	I0919 18:41:53.517157  761388 cri.go:89] found id: ""
	I0919 18:41:53.517169  761388 logs.go:276] 1 containers: [61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a]
	I0919 18:41:53.517224  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:53.520542  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 18:41:53.520602  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 18:41:53.554085  761388 cri.go:89] found id: "a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae"
	I0919 18:41:53.554107  761388 cri.go:89] found id: ""
	I0919 18:41:53.554116  761388 logs.go:276] 1 containers: [a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae]
	I0919 18:41:53.554174  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:53.557699  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 18:41:53.557779  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 18:41:53.591682  761388 cri.go:89] found id: "1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d"
	I0919 18:41:53.591703  761388 cri.go:89] found id: ""
	I0919 18:41:53.591711  761388 logs.go:276] 1 containers: [1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d]
	I0919 18:41:53.591755  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:53.595094  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 18:41:53.595172  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 18:41:53.630170  761388 cri.go:89] found id: "4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148"
	I0919 18:41:53.630192  761388 cri.go:89] found id: ""
	I0919 18:41:53.630199  761388 logs.go:276] 1 containers: [4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148]
	I0919 18:41:53.630257  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:53.633583  761388 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 18:41:53.633636  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 18:41:53.667431  761388 cri.go:89] found id: "28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea"
	I0919 18:41:53.667451  761388 cri.go:89] found id: ""
	I0919 18:41:53.667459  761388 logs.go:276] 1 containers: [28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea]
	I0919 18:41:53.667505  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:53.670883  761388 logs.go:123] Gathering logs for coredns [61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a] ...
	I0919 18:41:53.670906  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a"
	I0919 18:41:53.707961  761388 logs.go:123] Gathering logs for kube-scheduler [a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae] ...
	I0919 18:41:53.707993  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae"
	I0919 18:41:53.749962  761388 logs.go:123] Gathering logs for kube-controller-manager [4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148] ...
	I0919 18:41:53.749997  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148"
	I0919 18:41:53.808507  761388 logs.go:123] Gathering logs for kindnet [28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea] ...
	I0919 18:41:53.808548  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea"
	I0919 18:41:53.843831  761388 logs.go:123] Gathering logs for container status ...
	I0919 18:41:53.843860  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 18:41:53.886934  761388 logs.go:123] Gathering logs for kubelet ...
	I0919 18:41:53.886962  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 18:41:53.965269  761388 logs.go:123] Gathering logs for dmesg ...
	I0919 18:41:53.965305  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 18:41:54.000130  761388 logs.go:123] Gathering logs for describe nodes ...
	I0919 18:41:54.000165  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 18:41:54.102256  761388 logs.go:123] Gathering logs for CRI-O ...
	I0919 18:41:54.102283  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 18:41:54.180041  761388 logs.go:123] Gathering logs for kube-apiserver [d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf] ...
	I0919 18:41:54.180082  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf"
	I0919 18:41:54.225323  761388 logs.go:123] Gathering logs for etcd [daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf] ...
	I0919 18:41:54.225355  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf"
	I0919 18:41:54.270873  761388 logs.go:123] Gathering logs for kube-proxy [1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d] ...
	I0919 18:41:54.270914  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d"
	I0919 18:41:56.816722  761388 system_pods.go:59] 19 kube-system pods found
	I0919 18:41:56.816754  761388 system_pods.go:61] "coredns-7c65d6cfc9-xxkrh" [a7aaff41-f43e-4f04-b483-640f84c09e46] Running
	I0919 18:41:56.816759  761388 system_pods.go:61] "csi-hostpath-attacher-0" [baa243bf-40a7-484e-8c01-0899f41d8354] Running
	I0919 18:41:56.816763  761388 system_pods.go:61] "csi-hostpath-resizer-0" [3c4594f5-9d7b-4793-a0c8-7c6105b7d474] Running
	I0919 18:41:56.816767  761388 system_pods.go:61] "csi-hostpathplugin-wvvls" [354c11da-ee7f-4cda-9e0d-9814a4c5ece1] Running
	I0919 18:41:56.816770  761388 system_pods.go:61] "etcd-addons-685250" [cdb92c06-962c-4149-b7f6-bb5fe8331afd] Running
	I0919 18:41:56.816773  761388 system_pods.go:61] "kindnet-nr24c" [8747e20c-57fd-4ffe-9f87-ddda89de3e7b] Running
	I0919 18:41:56.816777  761388 system_pods.go:61] "kube-apiserver-addons-685250" [593c1822-def4-4967-babb-da46832c2f3b] Running
	I0919 18:41:56.816780  761388 system_pods.go:61] "kube-controller-manager-addons-685250" [241a64c3-08de-424a-8a6f-aaad07ae351f] Running
	I0919 18:41:56.816783  761388 system_pods.go:61] "kube-ingress-dns-minikube" [4d2c1d92-69aa-4dcd-be37-639b9fd4ab3d] Running
	I0919 18:41:56.816787  761388 system_pods.go:61] "kube-proxy-tt5h8" [693e7420-8268-43db-82ab-191606a57636] Running
	I0919 18:41:56.816791  761388 system_pods.go:61] "kube-scheduler-addons-685250" [57e53de0-08d3-4b04-822c-361178eb9bdf] Running
	I0919 18:41:56.816796  761388 system_pods.go:61] "metrics-server-84c5f94fbc-gpv2k" [0041dcd9-b46b-406b-a78c-728fda2b92cc] Running
	I0919 18:41:56.816800  761388 system_pods.go:61] "nvidia-device-plugin-daemonset-lnffq" [b2573f29-e8a6-4fc7-9a19-a01fb32e67f2] Running
	I0919 18:41:56.816805  761388 system_pods.go:61] "registry-66c9cd494c-tsz4w" [bdd1e643-0c83-4fed-a147-0dd79f789e29] Running
	I0919 18:41:56.816814  761388 system_pods.go:61] "registry-proxy-rgdgh" [fc0b3544-d729-4e33-a260-ef1ab277d08f] Running
	I0919 18:41:56.816821  761388 system_pods.go:61] "snapshot-controller-56fcc65765-hpwtx" [119e2c3a-894e-4b8d-b275-06125bb32c87] Running
	I0919 18:41:56.816825  761388 system_pods.go:61] "snapshot-controller-56fcc65765-qsngh" [8eba870c-9765-4259-b19c-945987c52d6e] Running
	I0919 18:41:56.816831  761388 system_pods.go:61] "storage-provisioner" [ddbf1396-7100-4a51-a1b7-b6896cabc0f4] Running
	I0919 18:41:56.816836  761388 system_pods.go:61] "tiller-deploy-b48cc5f79-64k5s" [bedc3304-f3bb-4c40-bb2c-bec621a3645c] Running
	I0919 18:41:56.816844  761388 system_pods.go:74] duration metric: took 3.409487976s to wait for pod list to return data ...
	I0919 18:41:56.816856  761388 default_sa.go:34] waiting for default service account to be created ...
	I0919 18:41:56.819044  761388 default_sa.go:45] found service account: "default"
	I0919 18:41:56.819064  761388 default_sa.go:55] duration metric: took 2.201823ms for default service account to be created ...
	I0919 18:41:56.819072  761388 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 18:41:56.827195  761388 system_pods.go:86] 19 kube-system pods found
	I0919 18:41:56.827219  761388 system_pods.go:89] "coredns-7c65d6cfc9-xxkrh" [a7aaff41-f43e-4f04-b483-640f84c09e46] Running
	I0919 18:41:56.827224  761388 system_pods.go:89] "csi-hostpath-attacher-0" [baa243bf-40a7-484e-8c01-0899f41d8354] Running
	I0919 18:41:56.827229  761388 system_pods.go:89] "csi-hostpath-resizer-0" [3c4594f5-9d7b-4793-a0c8-7c6105b7d474] Running
	I0919 18:41:56.827232  761388 system_pods.go:89] "csi-hostpathplugin-wvvls" [354c11da-ee7f-4cda-9e0d-9814a4c5ece1] Running
	I0919 18:41:56.827236  761388 system_pods.go:89] "etcd-addons-685250" [cdb92c06-962c-4149-b7f6-bb5fe8331afd] Running
	I0919 18:41:56.827239  761388 system_pods.go:89] "kindnet-nr24c" [8747e20c-57fd-4ffe-9f87-ddda89de3e7b] Running
	I0919 18:41:56.827243  761388 system_pods.go:89] "kube-apiserver-addons-685250" [593c1822-def4-4967-babb-da46832c2f3b] Running
	I0919 18:41:56.827246  761388 system_pods.go:89] "kube-controller-manager-addons-685250" [241a64c3-08de-424a-8a6f-aaad07ae351f] Running
	I0919 18:41:56.827250  761388 system_pods.go:89] "kube-ingress-dns-minikube" [4d2c1d92-69aa-4dcd-be37-639b9fd4ab3d] Running
	I0919 18:41:56.827254  761388 system_pods.go:89] "kube-proxy-tt5h8" [693e7420-8268-43db-82ab-191606a57636] Running
	I0919 18:41:56.827258  761388 system_pods.go:89] "kube-scheduler-addons-685250" [57e53de0-08d3-4b04-822c-361178eb9bdf] Running
	I0919 18:41:56.827261  761388 system_pods.go:89] "metrics-server-84c5f94fbc-gpv2k" [0041dcd9-b46b-406b-a78c-728fda2b92cc] Running
	I0919 18:41:56.827264  761388 system_pods.go:89] "nvidia-device-plugin-daemonset-lnffq" [b2573f29-e8a6-4fc7-9a19-a01fb32e67f2] Running
	I0919 18:41:56.827267  761388 system_pods.go:89] "registry-66c9cd494c-tsz4w" [bdd1e643-0c83-4fed-a147-0dd79f789e29] Running
	I0919 18:41:56.827270  761388 system_pods.go:89] "registry-proxy-rgdgh" [fc0b3544-d729-4e33-a260-ef1ab277d08f] Running
	I0919 18:41:56.827273  761388 system_pods.go:89] "snapshot-controller-56fcc65765-hpwtx" [119e2c3a-894e-4b8d-b275-06125bb32c87] Running
	I0919 18:41:56.827276  761388 system_pods.go:89] "snapshot-controller-56fcc65765-qsngh" [8eba870c-9765-4259-b19c-945987c52d6e] Running
	I0919 18:41:56.827279  761388 system_pods.go:89] "storage-provisioner" [ddbf1396-7100-4a51-a1b7-b6896cabc0f4] Running
	I0919 18:41:56.827282  761388 system_pods.go:89] "tiller-deploy-b48cc5f79-64k5s" [bedc3304-f3bb-4c40-bb2c-bec621a3645c] Running
	I0919 18:41:56.827287  761388 system_pods.go:126] duration metric: took 8.210478ms to wait for k8s-apps to be running ...
	I0919 18:41:56.827294  761388 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 18:41:56.827364  761388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 18:41:56.838722  761388 system_svc.go:56] duration metric: took 11.419899ms WaitForService to wait for kubelet
	I0919 18:41:56.838749  761388 kubeadm.go:582] duration metric: took 2m12.825152378s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 18:41:56.838775  761388 node_conditions.go:102] verifying NodePressure condition ...
	I0919 18:41:56.841799  761388 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 18:41:56.841823  761388 node_conditions.go:123] node cpu capacity is 8
	I0919 18:41:56.841837  761388 node_conditions.go:105] duration metric: took 3.056374ms to run NodePressure ...
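
The NodePressure step reads each node's reported capacity (here 304681132Ki of ephemeral storage and 8 CPUs) and verifies that no pressure condition is set. A client-go sketch of that verification, using the same kubeconfig setup as the earlier sketch:

	// nodepressure.go — sketch of the node capacity/pressure check.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
				n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
			for _, c := range n.Status.Conditions {
				switch c.Type {
				case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
					if c.Status == corev1.ConditionTrue {
						fmt.Printf("  %s is True: %s\n", c.Type, c.Message)
					}
				}
			}
		}
	}
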
	I0919 18:41:56.841850  761388 start.go:241] waiting for startup goroutines ...
	I0919 18:41:56.841857  761388 start.go:246] waiting for cluster config update ...
	I0919 18:41:56.841872  761388 start.go:255] writing updated cluster config ...
	I0919 18:41:56.842127  761388 ssh_runner.go:195] Run: rm -f paused
	I0919 18:41:56.891468  761388 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0919 18:41:56.894630  761388 out.go:177] * Done! kubectl is now configured to use "addons-685250" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 19 18:58:50 addons-685250 crio[1028]: time="2024-09-19 18:58:50.360421025Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Sep 19 18:58:59 addons-685250 crio[1028]: time="2024-09-19 18:58:59.354554835Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=63a68e12-ad3d-49a1-ae90-b8353118d635 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:58:59 addons-685250 crio[1028]: time="2024-09-19 18:58:59.354821771Z" level=info msg="Image docker.io/nginx:alpine not found" id=63a68e12-ad3d-49a1-ae90-b8353118d635 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:59:01 addons-685250 crio[1028]: time="2024-09-19 18:59:01.353954461Z" level=info msg="Checking image status: docker.io/nginx:latest" id=ef4a1fda-7c55-4d22-b959-76f7ad330c01 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:59:01 addons-685250 crio[1028]: time="2024-09-19 18:59:01.354231066Z" level=info msg="Image docker.io/nginx:latest not found" id=ef4a1fda-7c55-4d22-b959-76f7ad330c01 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:59:04 addons-685250 crio[1028]: time="2024-09-19 18:59:04.354289091Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=46c9262e-ea86-41b4-b14f-ef4912c6cda1 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:59:04 addons-685250 crio[1028]: time="2024-09-19 18:59:04.354489989Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=46c9262e-ea86-41b4-b14f-ef4912c6cda1 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:59:10 addons-685250 crio[1028]: time="2024-09-19 18:59:10.354480872Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=0bff92d8-1327-4999-b596-47b73bacbe49 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:59:10 addons-685250 crio[1028]: time="2024-09-19 18:59:10.354701169Z" level=info msg="Image docker.io/nginx:alpine not found" id=0bff92d8-1327-4999-b596-47b73bacbe49 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:59:16 addons-685250 crio[1028]: time="2024-09-19 18:59:16.353532293Z" level=info msg="Checking image status: docker.io/nginx:latest" id=c79e5b3b-4afc-4c01-bb29-29fda5fd361e name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:59:16 addons-685250 crio[1028]: time="2024-09-19 18:59:16.353571787Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d7e7b3f7-de54-4b26-8b27-4c332a60c491 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:59:16 addons-685250 crio[1028]: time="2024-09-19 18:59:16.353801062Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d7e7b3f7-de54-4b26-8b27-4c332a60c491 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:59:16 addons-685250 crio[1028]: time="2024-09-19 18:59:16.353812921Z" level=info msg="Image docker.io/nginx:latest not found" id=c79e5b3b-4afc-4c01-bb29-29fda5fd361e name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:59:23 addons-685250 crio[1028]: time="2024-09-19 18:59:23.354049864Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=b64cf795-5600-47dd-9571-30e618e23c0f name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:59:23 addons-685250 crio[1028]: time="2024-09-19 18:59:23.354324184Z" level=info msg="Image docker.io/nginx:alpine not found" id=b64cf795-5600-47dd-9571-30e618e23c0f name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:59:27 addons-685250 crio[1028]: time="2024-09-19 18:59:27.354183893Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a393d503-757b-4693-87b6-ef8b4645c444 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:59:27 addons-685250 crio[1028]: time="2024-09-19 18:59:27.354441487Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=a393d503-757b-4693-87b6-ef8b4645c444 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:59:31 addons-685250 crio[1028]: time="2024-09-19 18:59:31.354221454Z" level=info msg="Checking image status: docker.io/nginx:latest" id=79d2cda2-a00a-482e-9d65-3c0c6619df25 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:59:31 addons-685250 crio[1028]: time="2024-09-19 18:59:31.354511788Z" level=info msg="Image docker.io/nginx:latest not found" id=79d2cda2-a00a-482e-9d65-3c0c6619df25 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:59:37 addons-685250 crio[1028]: time="2024-09-19 18:59:37.354202478Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=014e65e3-dd8d-41da-a7b5-1035e0ab1cb0 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:59:37 addons-685250 crio[1028]: time="2024-09-19 18:59:37.354471271Z" level=info msg="Image docker.io/nginx:alpine not found" id=014e65e3-dd8d-41da-a7b5-1035e0ab1cb0 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:59:41 addons-685250 crio[1028]: time="2024-09-19 18:59:41.354379383Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dd523a4b-ad44-4238-b3a1-f36c7ff87e8d name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:59:41 addons-685250 crio[1028]: time="2024-09-19 18:59:41.354646062Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=dd523a4b-ad44-4238-b3a1-f36c7ff87e8d name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:59:43 addons-685250 crio[1028]: time="2024-09-19 18:59:43.354513352Z" level=info msg="Checking image status: docker.io/nginx:latest" id=de55a7dd-35f6-4a4a-8c19-a062475c4d87 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:59:43 addons-685250 crio[1028]: time="2024-09-19 18:59:43.354828515Z" level=info msg="Image docker.io/nginx:latest not found" id=de55a7dd-35f6-4a4a-8c19-a062475c4d87 name=/runtime.v1.ImageService/ImageStatus
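
The CRI-O excerpt above shows the runtime answering kubelet's ImageStatus queries with "not found" for docker.io/nginx:alpine, docker.io/nginx:latest, and gcr.io/k8s-minikube/busybox:1.28.4-glibc every few seconds, meaning those pulls never completed on this node. The same status query can be reproduced by hand with crictl; a sketch:

	// imagestatus.go — sketch reproducing the ImageStatus checks with crictl.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		for _, img := range []string{
			"docker.io/nginx:alpine",
			"docker.io/nginx:latest",
			"gcr.io/k8s-minikube/busybox:1.28.4-glibc",
		} {
			// "crictl inspecti" prints image metadata, or exits non-zero if the image is absent.
			if out, err := exec.Command("sudo", "crictl", "inspecti", img).CombinedOutput(); err != nil {
				fmt.Printf("image %s not found: %v\n", img, err)
			} else {
				fmt.Printf("%s\n", out)
			}
		}
	}
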
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	9631f3dbcf504       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          18 minutes ago      Running             csi-snapshotter                          0                   9d633bbb3f6dc       csi-hostpathplugin-wvvls
	96030830b51d1       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          18 minutes ago      Running             csi-provisioner                          0                   9d633bbb3f6dc       csi-hostpathplugin-wvvls
	32bc4d23668fc       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            18 minutes ago      Running             liveness-probe                           0                   9d633bbb3f6dc       csi-hostpathplugin-wvvls
	0cc2312cf82a4       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           18 minutes ago      Running             hostpath                                 0                   9d633bbb3f6dc       csi-hostpathplugin-wvvls
	8763c1c636d0e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                                 18 minutes ago      Running             gcp-auth                                 0                   c4905e6f06668       gcp-auth-89d5ffd79-5xmj7
	6ec44220259bc       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6                             18 minutes ago      Running             controller                               0                   7eeed172b87cd       ingress-nginx-controller-bc57996ff-jwqfz
	533fe244bc19f       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                18 minutes ago      Running             node-driver-registrar                    0                   9d633bbb3f6dc       csi-hostpathplugin-wvvls
	781e8a586344e       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              18 minutes ago      Running             csi-resizer                              0                   79d20db0c7bd8       csi-hostpath-resizer-0
	135118d48b8e5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   18 minutes ago      Exited              patch                                    0                   b5047ec8d653b       ingress-nginx-admission-patch-zkk9z
	6148ff93b7e21       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      18 minutes ago      Running             volume-snapshot-controller               0                   2c111431a9537       snapshot-controller-56fcc65765-hpwtx
	776cccb0a5bb1       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   18 minutes ago      Running             csi-external-health-monitor-controller   0                   9d633bbb3f6dc       csi-hostpathplugin-wvvls
	ae42c7830ff31       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      18 minutes ago      Running             volume-snapshot-controller               0                   a67d1128cd369       snapshot-controller-56fcc65765-qsngh
	3bae675b3b545       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   18 minutes ago      Exited              create                                   0                   00fa51ee04653       ingress-nginx-admission-create-rqqsb
	cd361280e82f5       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             19 minutes ago      Running             csi-attacher                             0                   995144454e795       csi-hostpath-attacher-0
	71455e9d9d7f9       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             19 minutes ago      Running             minikube-ingress-dns                     0                   1b3ebc5c0bddd       kube-ingress-dns-minikube
	c265d33c64155       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             19 minutes ago      Running             storage-provisioner                      0                   f0b8765d93237       storage-provisioner
	61dc325585534       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             19 minutes ago      Running             coredns                                  0                   70191f5a80edd       coredns-7c65d6cfc9-xxkrh
	28c707c30998a       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                                             19 minutes ago      Running             kindnet-cni                              0                   d0d4a24bd5f33       kindnet-nr24c
	1577029617c13       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                                             19 minutes ago      Running             kube-proxy                               0                   006fe668e3bca       kube-proxy-tt5h8
	a9c5d6500618f       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                                             20 minutes ago      Running             kube-scheduler                           0                   6a497d68d67db       kube-scheduler-addons-685250
	4b38bddc95b37       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                                             20 minutes ago      Running             kube-controller-manager                  0                   8dc935b2a1118       kube-controller-manager-addons-685250
	daa04e6dadb8c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                                             20 minutes ago      Running             etcd                                     0                   49d2cd4b861cb       etcd-addons-685250
	d48e736f52b35       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                                             20 minutes ago      Running             kube-apiserver                           0                   ee84a44e45fe4       kube-apiserver-addons-685250
	
	
	==> coredns [61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a] <==
	[INFO] 10.244.0.18:34436 - 35698 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000108309s
	[INFO] 10.244.0.18:53834 - 64751 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000039533s
	[INFO] 10.244.0.18:53834 - 26861 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000063287s
	[INFO] 10.244.0.18:40724 - 19030 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005948549s
	[INFO] 10.244.0.18:40724 - 2384 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.00624164s
	[INFO] 10.244.0.18:55178 - 49717 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004779846s
	[INFO] 10.244.0.18:55178 - 43576 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.008989283s
	[INFO] 10.244.0.18:35236 - 29185 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005503532s
	[INFO] 10.244.0.18:35236 - 29053 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006569969s
	[INFO] 10.244.0.18:58901 - 23064 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00007067s
	[INFO] 10.244.0.18:58901 - 45339 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000090322s
	[INFO] 10.244.0.21:52948 - 4177 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000227224s
	[INFO] 10.244.0.21:45787 - 22571 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000317788s
	[INFO] 10.244.0.21:59704 - 52899 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000152904s
	[INFO] 10.244.0.21:50018 - 4022 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000239218s
	[INFO] 10.244.0.21:53553 - 39101 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000141888s
	[INFO] 10.244.0.21:37741 - 20732 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000217668s
	[INFO] 10.244.0.21:55394 - 50618 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.005906983s
	[INFO] 10.244.0.21:37603 - 64460 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.00595091s
	[INFO] 10.244.0.21:43538 - 27403 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006051611s
	[INFO] 10.244.0.21:54216 - 9854 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.00637344s
	[INFO] 10.244.0.21:36139 - 65099 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007481578s
	[INFO] 10.244.0.21:49105 - 14009 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.010893085s
	[INFO] 10.244.0.21:52556 - 17077 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000849386s
	[INFO] 10.244.0.21:56780 - 3812 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000933647s
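
The NXDOMAIN run above is the standard ndots:5 search-path walk: registry.kube-system.svc.cluster.local has only four dots, so the pod's resolver tries every search suffix (svc.cluster.local, cluster.local, then the GCE-internal domains) before the absolute name finally answers NOERROR. A sketch to reproduce the walk from inside the cluster; the probe image is an assumption, any busybox with a working nslookup (1.28 is the usual pick) will do:

	kubectl --context addons-685250 run dns-probe --rm -it --restart=Never \
	  --image=busybox:1.28 -- nslookup registry.kube-system.svc.cluster.local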
	
	
	==> describe nodes <==
	Name:               addons-685250
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-685250
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=addons-685250
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_19T18_39_40_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-685250
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-685250"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 18:39:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-685250
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 18:59:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 18:56:49 +0000   Thu, 19 Sep 2024 18:39:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 18:56:49 +0000   Thu, 19 Sep 2024 18:39:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 18:56:49 +0000   Thu, 19 Sep 2024 18:39:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 18:56:49 +0000   Thu, 19 Sep 2024 18:40:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-685250
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 59964951ae744ca891a1d33d48395cb6
	  System UUID:                ca4c5e3c-dd72-4ffd-b420-cdf7d87c497b
	  Boot ID:                    e13586fb-8251-4108-a9ef-ca5be7772d16
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s
	  gcp-auth                    gcp-auth-89d5ffd79-5xmj7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-jwqfz    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         19m
	  kube-system                 coredns-7c65d6cfc9-xxkrh                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 csi-hostpathplugin-wvvls                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 etcd-addons-685250                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         20m
	  kube-system                 kindnet-nr24c                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-addons-685250                250m (3%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-addons-685250       200m (2%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-tt5h8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-addons-685250                100m (1%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 snapshot-controller-56fcc65765-hpwtx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 snapshot-controller-56fcc65765-qsngh        0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 19m                kube-proxy       
	  Normal   Starting                 20m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 20m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node addons-685250 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node addons-685250 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node addons-685250 status is now: NodeHasSufficientPID
	  Normal   Starting                 20m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 20m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  20m                kubelet          Node addons-685250 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20m                kubelet          Node addons-685250 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20m                kubelet          Node addons-685250 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           20m                node-controller  Node addons-685250 event: Registered Node addons-685250 in Controller
	  Normal   NodeReady                19m                kubelet          Node addons-685250 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000002] ll header: 00000000: 02 42 9c 9b da 37 02 42 c0 a8 55 02 08 00
	[ +49.810034] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000005] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.000002] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000002] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +1.030260] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000006] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.003972] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000005] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +2.011865] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000005] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.003970] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000004] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +4.219718] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000009] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000005] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[Sep19 18:17] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000009] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.000035] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000006] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	
	
	==> etcd [daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf] <==
	{"level":"info","ts":"2024-09-19T18:39:45.856224Z","caller":"traceutil/trace.go:171","msg":"trace[83912261] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:395; }","duration":"202.035355ms","start":"2024-09-19T18:39:45.654180Z","end":"2024-09-19T18:39:45.856215Z","steps":["trace[83912261] 'agreement among raft nodes before linearized reading'  (duration: 201.947574ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:39:45.856375Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.947549ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-19T18:39:45.856402Z","caller":"traceutil/trace.go:171","msg":"trace[297556485] range","detail":"{range_begin:/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset; range_end:; response_count:0; response_revision:395; }","duration":"206.977474ms","start":"2024-09-19T18:39:45.649415Z","end":"2024-09-19T18:39:45.856393Z","steps":["trace[297556485] 'agreement among raft nodes before linearized reading'  (duration: 206.93087ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:39:45.856532Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.416757ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-19T18:39:45.856554Z","caller":"traceutil/trace.go:171","msg":"trace[47804488] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:0; response_revision:395; }","duration":"103.442648ms","start":"2024-09-19T18:39:45.753105Z","end":"2024-09-19T18:39:45.856548Z","steps":["trace[47804488] 'agreement among raft nodes before linearized reading'  (duration: 103.402348ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:39:46.450928Z","caller":"traceutil/trace.go:171","msg":"trace[447015363] transaction","detail":"{read_only:false; response_revision:402; number_of_response:1; }","duration":"192.15555ms","start":"2024-09-19T18:39:46.258754Z","end":"2024-09-19T18:39:46.450910Z","steps":["trace[447015363] 'process raft request'  (duration: 192.041293ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:39:46.457451Z","caller":"traceutil/trace.go:171","msg":"trace[199583041] transaction","detail":"{read_only:false; response_revision:403; number_of_response:1; }","duration":"102.841342ms","start":"2024-09-19T18:39:46.354595Z","end":"2024-09-19T18:39:46.457437Z","steps":["trace[199583041] 'process raft request'  (duration: 102.766841ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:39:47.149186Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.608135ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128032005940909206 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-h29wt\" mod_revision:386 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-h29wt\" value_size:3943 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-h29wt\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-19T18:39:47.149875Z","caller":"traceutil/trace.go:171","msg":"trace[786871471] transaction","detail":"{read_only:false; response_revision:409; number_of_response:1; }","duration":"212.562991ms","start":"2024-09-19T18:39:46.937292Z","end":"2024-09-19T18:39:47.149855Z","steps":["trace[786871471] 'process raft request'  (duration: 110.633244ms)","trace[786871471] 'compare'  (duration: 100.378906ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-19T18:39:47.150124Z","caller":"traceutil/trace.go:171","msg":"trace[713102619] transaction","detail":"{read_only:false; response_revision:410; number_of_response:1; }","duration":"212.118368ms","start":"2024-09-19T18:39:46.937993Z","end":"2024-09-19T18:39:47.150111Z","steps":["trace[713102619] 'process raft request'  (duration: 211.29202ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:39:47.150315Z","caller":"traceutil/trace.go:171","msg":"trace[1466387580] transaction","detail":"{read_only:false; response_revision:411; number_of_response:1; }","duration":"203.943604ms","start":"2024-09-19T18:39:46.946361Z","end":"2024-09-19T18:39:47.150305Z","steps":["trace[1466387580] 'process raft request'  (duration: 203.030294ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:39:47.150417Z","caller":"traceutil/trace.go:171","msg":"trace[1484778379] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"202.338487ms","start":"2024-09-19T18:39:46.948072Z","end":"2024-09-19T18:39:47.150411Z","steps":["trace[1484778379] 'process raft request'  (duration: 201.364589ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:39:47.150492Z","caller":"traceutil/trace.go:171","msg":"trace[1762014815] linearizableReadLoop","detail":"{readStateIndex:421; appliedIndex:419; }","duration":"204.192549ms","start":"2024-09-19T18:39:46.946292Z","end":"2024-09-19T18:39:47.150485Z","steps":["trace[1762014815] 'read index received'  (duration: 101.644452ms)","trace[1762014815] 'applied index is now lower than readState.Index'  (duration: 102.547441ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-19T18:39:47.150718Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.417513ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-19T18:39:47.150742Z","caller":"traceutil/trace.go:171","msg":"trace[30934350] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/minikube-ingress-dns; range_end:; response_count:0; response_revision:413; }","duration":"204.449131ms","start":"2024-09-19T18:39:46.946286Z","end":"2024-09-19T18:39:47.150735Z","steps":["trace[30934350] 'agreement among raft nodes before linearized reading'  (duration: 204.399184ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:41:08.113307Z","caller":"traceutil/trace.go:171","msg":"trace[1867049731] transaction","detail":"{read_only:false; response_revision:1173; number_of_response:1; }","duration":"218.87531ms","start":"2024-09-19T18:41:07.893123Z","end":"2024-09-19T18:41:08.111998Z","steps":["trace[1867049731] 'process raft request'  (duration: 146.821964ms)","trace[1867049731] 'compare'  (duration: 71.937946ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-19T18:49:35.458285Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1609}
	{"level":"info","ts":"2024-09-19T18:49:35.481341Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1609,"took":"22.590141ms","hash":3032817660,"current-db-size-bytes":6651904,"current-db-size":"6.7 MB","current-db-size-in-use-bytes":3510272,"current-db-size-in-use":"3.5 MB"}
	{"level":"info","ts":"2024-09-19T18:49:35.481386Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3032817660,"revision":1609,"compact-revision":-1}
	{"level":"info","ts":"2024-09-19T18:54:35.463171Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2033}
	{"level":"info","ts":"2024-09-19T18:54:35.479457Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2033,"took":"15.735537ms","hash":3624308866,"current-db-size-bytes":6651904,"current-db-size":"6.7 MB","current-db-size-in-use-bytes":4227072,"current-db-size-in-use":"4.2 MB"}
	{"level":"info","ts":"2024-09-19T18:54:35.479504Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3624308866,"revision":2033,"compact-revision":1609}
	{"level":"info","ts":"2024-09-19T18:59:35.467496Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2808}
	{"level":"info","ts":"2024-09-19T18:59:35.486031Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2808,"took":"17.968616ms","hash":1894991277,"current-db-size-bytes":6651904,"current-db-size":"6.7 MB","current-db-size-in-use-bytes":3874816,"current-db-size-in-use":"3.9 MB"}
	{"level":"info","ts":"2024-09-19T18:59:35.486074Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1894991277,"revision":2808,"compact-revision":2033}
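
The "apply request took too long" warnings are all clustered around cluster bootstrap (18:39 to 18:41); the later entries are the routine 5-minute compactions and look healthy. A quick way to check whether slow applies recurred after startup, with the pod name taken from the container status table above:

	kubectl --context addons-685250 -n kube-system logs etcd-addons-685250 | grep "took too long" | tail -n 5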
	
	
	==> gcp-auth [8763c1c636d0e544cec68dd7fd43a6178da8c1609fed0cf08b900e90bcd721ae] <==
	2024/09/19 18:41:56 Ready to write response ...
	2024/09/19 18:41:57 Ready to marshal response ...
	2024/09/19 18:41:57 Ready to write response ...
	2024/09/19 18:41:57 Ready to marshal response ...
	2024/09/19 18:41:57 Ready to write response ...
	2024/09/19 18:50:00 Ready to marshal response ...
	2024/09/19 18:50:00 Ready to write response ...
	2024/09/19 18:50:00 Ready to marshal response ...
	2024/09/19 18:50:00 Ready to write response ...
	2024/09/19 18:50:06 Ready to marshal response ...
	2024/09/19 18:50:06 Ready to write response ...
	2024/09/19 18:50:09 Ready to marshal response ...
	2024/09/19 18:50:09 Ready to write response ...
	2024/09/19 18:50:09 Ready to marshal response ...
	2024/09/19 18:50:09 Ready to write response ...
	2024/09/19 18:50:59 Ready to marshal response ...
	2024/09/19 18:50:59 Ready to write response ...
	2024/09/19 18:50:59 Ready to marshal response ...
	2024/09/19 18:50:59 Ready to write response ...
	2024/09/19 18:50:59 Ready to marshal response ...
	2024/09/19 18:50:59 Ready to write response ...
	2024/09/19 18:51:33 Ready to marshal response ...
	2024/09/19 18:51:33 Ready to write response ...
	2024/09/19 18:51:42 Ready to marshal response ...
	2024/09/19 18:51:42 Ready to write response ...
	
	
	==> kernel <==
	 18:59:44 up  3:42,  0 users,  load average: 0.27, 0.18, 0.36
	Linux addons-685250 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea] <==
	I0919 18:57:38.359067       1 main.go:299] handling current node
	I0919 18:57:48.351184       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:57:48.351225       1 main.go:299] handling current node
	I0919 18:57:58.351370       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:57:58.351403       1 main.go:299] handling current node
	I0919 18:58:08.351222       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:58:08.351256       1 main.go:299] handling current node
	I0919 18:58:18.352751       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:58:18.352787       1 main.go:299] handling current node
	I0919 18:58:28.351387       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:58:28.351429       1 main.go:299] handling current node
	I0919 18:58:38.352227       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:58:38.352267       1 main.go:299] handling current node
	I0919 18:58:48.351241       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:58:48.351274       1 main.go:299] handling current node
	I0919 18:58:58.350870       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:58:58.350915       1 main.go:299] handling current node
	I0919 18:59:08.352173       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:59:08.352240       1 main.go:299] handling current node
	I0919 18:59:18.351928       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:59:18.351983       1 main.go:299] handling current node
	I0919 18:59:28.351379       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:59:28.351422       1 main.go:299] handling current node
	I0919 18:59:38.355375       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:59:38.355409       1 main.go:299] handling current node
	
	
	==> kube-apiserver [d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf] <==
	 > logger="UnhandledError"
	E0919 18:41:46.384826       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.77.71:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.77.71:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.77.71:443: connect: connection refused" logger="UnhandledError"
	I0919 18:41:46.398246       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0919 18:50:10.564173       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:10.569821       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:10.575508       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:25.576915       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:30.878332       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:31.884590       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:32.891043       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:33.897594       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:34.904265       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:35.910640       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:36.916660       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:37.922615       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:38.928704       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:39.935718       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0919 18:50:59.939369       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.7.39"}
	I0919 18:51:21.107714       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0919 18:51:22.123982       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0919 18:51:39.581185       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.29:41094: read: connection reset by peer
	E0919 18:51:41.443959       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I0919 18:51:42.224676       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0919 18:51:42.394849       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.136.235"}
	I0919 18:55:47.437366       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
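
Two separate signals in the apiserver log: the aggregated metrics.k8s.io endpoint going unreachable (its backing service was removed, and the final line shows the aggregator dropping it from the queue), and a burst of bearer tokens for a serviceaccount that no longer exists. Both can be cross-checked; the local-path-storage namespace here is an assumption based on that addon's usual layout:

	# Is the aggregated metrics API still registered?
	kubectl --context addons-685250 get apiservice v1beta1.metrics.k8s.io
	# The failing tokens reference local-path-provisioner-service-account:
	kubectl --context addons-685250 -n local-path-storage get serviceaccounts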
	
	
	==> kube-controller-manager [4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148] <==
	I0919 18:51:52.413869       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	W0919 18:51:57.889795       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:51:57.889847       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:52:27.558659       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:52:27.558704       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:53:24.382837       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:53:24.382902       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:53:58.320420       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:53:58.320480       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:54:36.903837       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:54:36.903888       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:55:34.730951       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:55:34.731007       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0919 18:55:36.833056       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="10.157µs"
	W0919 18:56:27.467107       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:56:27.467158       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0919 18:56:49.477901       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-685250"
	W0919 18:57:00.833849       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:57:00.833894       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:57:41.206693       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:57:41.206750       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:58:28.018816       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:58:28.018877       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:59:20.489347       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:59:20.489401       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
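
The recurring PartialObjectMetadata watch failures are retry noise from a metadata informer whose CRD was deleted (the apiserver log above shows the gadget.kinvolk.io watchers being terminated at 18:51), not a controller fault. Confirming the CRD is gone; a NotFound here is the expected result:

	kubectl --context addons-685250 get crd traces.gadget.kinvolk.io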
	
	
	==> kube-proxy [1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d] <==
	I0919 18:39:47.957278       1 server_linux.go:66] "Using iptables proxy"
	I0919 18:39:49.044392       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0919 18:39:49.044560       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 18:39:49.357227       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 18:39:49.357310       1 server_linux.go:169] "Using iptables Proxier"
	I0919 18:39:49.437470       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 18:39:49.438149       1 server.go:483] "Version info" version="v1.31.1"
	I0919 18:39:49.438227       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 18:39:49.444383       1 config.go:199] "Starting service config controller"
	I0919 18:39:49.444434       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0919 18:39:49.444451       1 config.go:105] "Starting endpoint slice config controller"
	I0919 18:39:49.444468       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0919 18:39:49.445015       1 config.go:328] "Starting node config controller"
	I0919 18:39:49.445038       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0919 18:39:49.544520       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0919 18:39:49.544894       1 shared_informer.go:320] Caches are synced for service config
	I0919 18:39:49.545185       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae] <==
	W0919 18:39:36.759688       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0919 18:39:36.759698       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 18:39:36.759716       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:36.759719       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0919 18:39:36.759767       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0919 18:39:36.759715       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:37.577548       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0919 18:39:37.577594       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:37.591157       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 18:39:37.591194       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:37.662233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0919 18:39:37.662283       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:37.691829       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 18:39:37.691889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:37.691841       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0919 18:39:37.691945       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:37.788039       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0919 18:39:37.788093       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:37.902881       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 18:39:37.902929       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:37.943554       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0919 18:39:37.943606       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0919 18:39:37.964311       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0919 18:39:37.964357       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0919 18:39:40.957211       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
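
The "forbidden" storms at 18:39:36-37 are the scheduler's informers racing kubeadm's RBAC bootstrap; the final "Caches are synced" line at 18:39:40 shows they cleared. A sketch to verify they did not recur later (a match count of 0, with grep exiting non-zero, is the healthy outcome):

	kubectl --context addons-685250 -n kube-system logs kube-scheduler-addons-685250 --since=10m | grep -c forbidden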
	
	
	==> kubelet <==
	Sep 19 18:58:50 addons-685250 kubelet[1619]: E0919 18:58:50.381235    1619 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:busybox,Image:gcr.io/k8s-minikube/busybox:1.28.4-glibc,Command:[sleep 3600],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:GOOGLE_APPLICATION_CREDENTIALS,Value:/google-app-creds.json,ValueFrom:nil,},EnvVar{Name:PROJECT_ID,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCP_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GOOGLE_CLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:CLOUDSDK_CORE_PROJECT,Value:this_is_fake,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pbctc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:gcp-creds,ReadOnly:true,MountPath:/google-app-creds.json,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod busybox_default(c9e71acf-38e0-445c-9d8f-3735cbf69aa1): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: authentication failed" logger="UnhandledError"
	Sep 19 18:58:50 addons-685250 kubelet[1619]: E0919 18:58:50.382443    1619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: authentication failed\"" pod="default/busybox" podUID="c9e71acf-38e0-445c-9d8f-3735cbf69aa1"
	Sep 19 18:58:59 addons-685250 kubelet[1619]: E0919 18:58:59.355085    1619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="ebd6539d-2dc6-46b7-8766-cd26ce5e6547"
	Sep 19 18:58:59 addons-685250 kubelet[1619]: E0919 18:58:59.699904    1619 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772339699683949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:522996,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:58:59 addons-685250 kubelet[1619]: E0919 18:58:59.699947    1619 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772339699683949,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:522996,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:59:01 addons-685250 kubelet[1619]: E0919 18:59:01.354466    1619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod" podUID="337122f1-f839-443e-89c9-ab116e67ccad"
	Sep 19 18:59:04 addons-685250 kubelet[1619]: E0919 18:59:04.354781    1619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="c9e71acf-38e0-445c-9d8f-3735cbf69aa1"
	Sep 19 18:59:09 addons-685250 kubelet[1619]: E0919 18:59:09.701935    1619 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772349701735643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:522996,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:59:09 addons-685250 kubelet[1619]: E0919 18:59:09.701978    1619 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772349701735643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:522996,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:59:10 addons-685250 kubelet[1619]: E0919 18:59:10.354961    1619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="ebd6539d-2dc6-46b7-8766-cd26ce5e6547"
	Sep 19 18:59:16 addons-685250 kubelet[1619]: E0919 18:59:16.354060    1619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod" podUID="337122f1-f839-443e-89c9-ab116e67ccad"
	Sep 19 18:59:16 addons-685250 kubelet[1619]: E0919 18:59:16.354099    1619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="c9e71acf-38e0-445c-9d8f-3735cbf69aa1"
	Sep 19 18:59:19 addons-685250 kubelet[1619]: E0919 18:59:19.704180    1619 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772359703983688,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:522996,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:59:19 addons-685250 kubelet[1619]: E0919 18:59:19.704213    1619 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772359703983688,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:522996,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:59:23 addons-685250 kubelet[1619]: E0919 18:59:23.354580    1619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="ebd6539d-2dc6-46b7-8766-cd26ce5e6547"
	Sep 19 18:59:27 addons-685250 kubelet[1619]: E0919 18:59:27.354726    1619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="c9e71acf-38e0-445c-9d8f-3735cbf69aa1"
	Sep 19 18:59:29 addons-685250 kubelet[1619]: E0919 18:59:29.706680    1619 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772369706495367,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:522996,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:59:29 addons-685250 kubelet[1619]: E0919 18:59:29.706714    1619 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772369706495367,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:522996,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:59:31 addons-685250 kubelet[1619]: E0919 18:59:31.354813    1619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod" podUID="337122f1-f839-443e-89c9-ab116e67ccad"
	Sep 19 18:59:37 addons-685250 kubelet[1619]: E0919 18:59:37.354800    1619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="ebd6539d-2dc6-46b7-8766-cd26ce5e6547"
	Sep 19 18:59:39 addons-685250 kubelet[1619]: E0919 18:59:39.368428    1619 container_manager_linux.go:513] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/cdadbc576653c86e10c24ff9ab47a4cd6b7ecae7a3e166835365c09bb919cdaf, memory: /docker/cdadbc576653c86e10c24ff9ab47a4cd6b7ecae7a3e166835365c09bb919cdaf/system.slice/kubelet.service"
	Sep 19 18:59:39 addons-685250 kubelet[1619]: E0919 18:59:39.709238    1619 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772379708985270,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:522996,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:59:39 addons-685250 kubelet[1619]: E0919 18:59:39.709275    1619 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772379708985270,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:522996,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:59:41 addons-685250 kubelet[1619]: E0919 18:59:41.354880    1619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="c9e71acf-38e0-445c-9d8f-3735cbf69aa1"
	Sep 19 18:59:43 addons-685250 kubelet[1619]: E0919 18:59:43.355026    1619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod" podUID="337122f1-f839-443e-89c9-ab116e67ccad"
	
	
	==> storage-provisioner [c265d33c64155de4fde21bb6eae221bdd5a2524b7a15aa0b673f23ce4f17b12d] <==
	I0919 18:40:29.640679       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 18:40:29.648412       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 18:40:29.648464       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0919 18:40:29.655439       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 18:40:29.655525       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9a3690d0-7216-4b96-a260-4e04cffeb393", APIVersion:"v1", ResourceVersion:"963", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-685250_e66922b4-9073-4377-9148-47e4da8ece38 became leader
	I0919 18:40:29.655628       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-685250_e66922b4-9073-4377-9148-47e4da8ece38!
	I0919 18:40:29.756484       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-685250_e66922b4-9073-4377-9148-47e4da8ece38!
	

-- /stdout --
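The storage-provisioner log above shows the standard Kubernetes leader-election handshake: the provisioner acquires the kube-system/k8s.io-minikube-hostpath lease, emits a LeaderElection event, and only then starts its controller. A minimal sketch of that pattern using client-go's leaderelection package (illustrative only; the Lease-based lock and the identity string are assumptions, not the provisioner's actual code, which uses an Endpoints-based lock):

	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // assumes this sketch runs inside the cluster
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname() // hypothetical identity; any unique string works

		// Lease lock mirroring the lease name seen in the log above.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:            lock,
			LeaseDuration:   15 * time.Second,
			RenewDeadline:   10 * time.Second,
			RetryPeriod:     2 * time.Second,
			ReleaseOnCancel: true,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("became leader; a real provisioner would start its controller here")
				},
				OnStoppedLeading: func() {
					log.Println("lost leadership; shutting down")
				},
			},
		})
	}
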
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-685250 -n addons-685250
helpers_test.go:261: (dbg) Run:  kubectl --context addons-685250 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox nginx task-pv-pod ingress-nginx-admission-create-rqqsb ingress-nginx-admission-patch-zkk9z
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-685250 describe pod busybox nginx task-pv-pod ingress-nginx-admission-create-rqqsb ingress-nginx-admission-patch-zkk9z
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-685250 describe pod busybox nginx task-pv-pod ingress-nginx-admission-create-rqqsb ingress-nginx-admission-patch-zkk9z: exit status 1 (82.355521ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-685250/192.168.49.2
	Start Time:       Thu, 19 Sep 2024 18:41:57 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pbctc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-pbctc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  17m                   default-scheduler  Successfully assigned default/busybox to addons-685250
	  Normal   Pulling    16m (x4 over 17m)     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     16m (x4 over 17m)     kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     16m (x4 over 17m)     kubelet            Error: ErrImagePull
	  Warning  Failed     16m (x6 over 17m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    2m43s (x60 over 17m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-685250/192.168.49.2
	Start Time:       Thu, 19 Sep 2024 18:51:42 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w8nj8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-w8nj8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  8m2s                   default-scheduler  Successfully assigned default/nginx to addons-685250
	  Warning  Failed     4m35s                  kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m44s (x4 over 8m2s)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     3m13s (x3 over 7m31s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m13s (x4 over 7m31s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    2m50s (x7 over 7m31s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m50s (x7 over 7m31s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-685250/192.168.49.2
	Start Time:       Thu, 19 Sep 2024 18:50:06 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.25
	IPs:
	  IP:  10.244.0.25
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mzftq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-mzftq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m38s                  default-scheduler  Successfully assigned default/task-pv-pod to addons-685250
	  Warning  Failed     8m52s                  kubelet            Failed to pull image "docker.io/nginx": determining manifest MIME type for docker://nginx:latest: reading manifest sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m30s                  kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    5m43s (x4 over 9m38s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     5m12s (x4 over 8m52s)  kubelet            Error: ErrImagePull
	  Warning  Failed     5m12s (x2 over 8m7s)   kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     5m1s (x6 over 8m51s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m35s (x8 over 8m51s)  kubelet            Back-off pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-rqqsb" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-zkk9z" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-685250 describe pod busybox nginx task-pv-pod ingress-nginx-admission-create-rqqsb ingress-nginx-admission-patch-zkk9z: exit status 1
--- FAIL: TestAddons/parallel/Ingress (482.97s)
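Note the shared root cause across the three stuck pods described above: every docker.io pull dies on Docker Hub's toomanyrequests rate limit and the gcr.io busybox pull fails authentication, so all three pods sit in ImagePullBackOff until the test's wait budget expires. The describe command itself exits 1 only because the two admission job pods had already been cleaned up (the NotFound lines in stderr). The field-selector query the post-mortem helper runs (helpers_test.go:261) is easy to reproduce outside the harness; a minimal sketch of such a check (hypothetical wrapper, not the repo's helper itself):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	// nonRunningPods lists pods in all namespaces whose phase is not Running,
	// using the same field selector as the post-mortem helper.
	func nonRunningPods(kubeContext string) ([]string, error) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		pods, err := nonRunningPods("addons-685250")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("non-running pods:", pods)
	}
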

TestAddons/parallel/MetricsServer (339.29s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.451443ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
I0919 18:49:59.789783  760079 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0919 18:49:59.789809  760079 kapi.go:107] duration metric: took 4.024515ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
helpers_test.go:344: "metrics-server-84c5f94fbc-gpv2k" [0041dcd9-b46b-406b-a78c-728fda2b92cc] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00413777s
addons_test.go:417: (dbg) Run:  kubectl --context addons-685250 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-685250 top pods -n kube-system: exit status 1 (67.92145ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xxkrh, age: 10m19.858320779s

** /stderr **
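Every retry below fails identically: kubectl top is backed by the metrics.k8s.io aggregated API that metrics-server serves, and here it never returns a sample for the kube-system pods even though the metrics-server pod itself reported healthy. Outside the harness, the API can be probed directly with kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods" to tell an unregistered APIService apart from a scrape that simply has no data yet.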
I0919 18:50:04.861393  760079 retry.go:31] will retry after 3.328453268s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-685250 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-685250 top pods -n kube-system: exit status 1 (69.450122ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xxkrh, age: 10m23.257195019s

** /stderr **
I0919 18:50:08.260365  760079 retry.go:31] will retry after 4.849964628s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-685250 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-685250 top pods -n kube-system: exit status 1 (66.136492ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xxkrh, age: 10m28.175253403s

** /stderr **
I0919 18:50:13.177716  760079 retry.go:31] will retry after 8.839209852s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-685250 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-685250 top pods -n kube-system: exit status 1 (69.093052ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xxkrh, age: 10m37.084655022s

** /stderr **
I0919 18:50:22.087097  760079 retry.go:31] will retry after 10.479047037s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-685250 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-685250 top pods -n kube-system: exit status 1 (68.416217ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xxkrh, age: 10m47.632731535s

** /stderr **
I0919 18:50:32.635026  760079 retry.go:31] will retry after 22.277841643s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-685250 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-685250 top pods -n kube-system: exit status 1 (67.271531ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xxkrh, age: 11m9.979085619s

** /stderr **
I0919 18:50:54.981357  760079 retry.go:31] will retry after 33.334003742s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-685250 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-685250 top pods -n kube-system: exit status 1 (65.172339ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xxkrh, age: 11m43.378923248s

** /stderr **
I0919 18:51:28.381795  760079 retry.go:31] will retry after 45.561442507s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-685250 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-685250 top pods -n kube-system: exit status 1 (65.667594ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xxkrh, age: 12m29.007232064s

** /stderr **
I0919 18:52:14.009870  760079 retry.go:31] will retry after 32.489608244s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-685250 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-685250 top pods -n kube-system: exit status 1 (69.298391ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xxkrh, age: 13m1.56673438s

** /stderr **
I0919 18:52:46.569617  760079 retry.go:31] will retry after 43.147501573s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-685250 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-685250 top pods -n kube-system: exit status 1 (65.17666ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xxkrh, age: 13m44.781122525s

** /stderr **
I0919 18:53:29.783614  760079 retry.go:31] will retry after 59.850608546s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-685250 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-685250 top pods -n kube-system: exit status 1 (66.53393ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xxkrh, age: 14m44.698144588s

** /stderr **
I0919 18:54:29.701251  760079 retry.go:31] will retry after 1m6.626665505s: exit status 1
addons_test.go:417: (dbg) Run:  kubectl --context addons-685250 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-685250 top pods -n kube-system: exit status 1 (65.620752ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-xxkrh, age: 15m51.391450157s

** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
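The retry.go lines above show the harness sleeping a jittered, roughly growing interval between kubectl top attempts (3.3s, 4.8s, 8.8s, ... past a minute) until the overall budget is exhausted. A minimal sketch of that retry-with-backoff shape (illustrative only, not minikube's actual retry.go):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff calls fn until it succeeds or the budget is spent,
	// sleeping a jittered, growing interval between attempts.
	func retryWithBackoff(budget time.Duration, fn func() error) error {
		deadline := time.Now().Add(budget)
		wait := 3 * time.Second
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("budget exhausted: %w", err)
			}
			sleep := wait + time.Duration(rand.Int63n(int64(wait))) // add jitter
			fmt.Printf("will retry after %s: %v\n", sleep, err)
			time.Sleep(sleep)
			wait = wait * 3 / 2 // grow the base interval
		}
	}

	func main() {
		attempts := 0
		err := retryWithBackoff(30*time.Second, func() error {
			attempts++
			if attempts < 4 {
				return errors.New("metrics not available yet")
			}
			return nil
		})
		fmt.Println("result:", err)
	}
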
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-685250 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-685250
helpers_test.go:235: (dbg) docker inspect addons-685250:

-- stdout --
	[
	    {
	        "Id": "cdadbc576653c86e10c24ff9ab47a4cd6b7ecae7a3e166835365c09bb919cdaf",
	        "Created": "2024-09-19T18:39:26.544485958Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 762128,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-19T18:39:26.653035442Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bb3bcbaabeeeadbf6b43ae7d1d07e504b3c8a94ec024df89bcb237eba4f5e9b3",
	        "ResolvConfPath": "/var/lib/docker/containers/cdadbc576653c86e10c24ff9ab47a4cd6b7ecae7a3e166835365c09bb919cdaf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cdadbc576653c86e10c24ff9ab47a4cd6b7ecae7a3e166835365c09bb919cdaf/hostname",
	        "HostsPath": "/var/lib/docker/containers/cdadbc576653c86e10c24ff9ab47a4cd6b7ecae7a3e166835365c09bb919cdaf/hosts",
	        "LogPath": "/var/lib/docker/containers/cdadbc576653c86e10c24ff9ab47a4cd6b7ecae7a3e166835365c09bb919cdaf/cdadbc576653c86e10c24ff9ab47a4cd6b7ecae7a3e166835365c09bb919cdaf-json.log",
	        "Name": "/addons-685250",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-685250:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-685250",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/719a21df389c5cfbde7422092d8e14fa0af9502fcf666f7bf69a458b574172f9-init/diff:/var/lib/docker/overlay2/71eee05749e16aef5497ee0d3682f846917f1ee6949d544cdec1fff2723452d5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/719a21df389c5cfbde7422092d8e14fa0af9502fcf666f7bf69a458b574172f9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/719a21df389c5cfbde7422092d8e14fa0af9502fcf666f7bf69a458b574172f9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/719a21df389c5cfbde7422092d8e14fa0af9502fcf666f7bf69a458b574172f9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-685250",
	                "Source": "/var/lib/docker/volumes/addons-685250/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-685250",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-685250",
	                "name.minikube.sigs.k8s.io": "addons-685250",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1b0ccece079b2c012374acf46f9c349cae0c8bd9ae1a208e2d0acc049d21c7cb",
	            "SandboxKey": "/var/run/docker/netns/1b0ccece079b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33518"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33519"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33522"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33520"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33521"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-685250": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "3c159902c31cb41244d3423728e25a3f29e7e8e24a95c6da692d29e053f66798",
	                    "EndpointID": "51640df6c09057e35d4d5a9f04688e387f2981906971ee1afa85b24730ac60a3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-685250",
	                        "cdadbc576653"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-685250 -n addons-685250
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-685250 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-685250 logs -n 25: (1.266609064s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-845536                                                                     | download-only-845536   | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC | 19 Sep 24 18:38 UTC |
	| start   | -o=json --download-only                                                                     | download-only-759185   | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC |                     |
	|         | -p download-only-759185                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| delete  | -p download-only-759185                                                                     | download-only-759185   | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| delete  | -p download-only-845536                                                                     | download-only-845536   | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| delete  | -p download-only-759185                                                                     | download-only-759185   | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| start   | --download-only -p                                                                          | download-docker-985684 | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | download-docker-985684                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-985684                                                                   | download-docker-985684 | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-515604   | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | binary-mirror-515604                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:32895                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-515604                                                                     | binary-mirror-515604   | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| addons  | disable dashboard -p                                                                        | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | addons-685250                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | addons-685250                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-685250 --wait=true                                                                | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:41 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-685250 ssh cat                                                                       | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:50 UTC | 19 Sep 24 18:50 UTC |
	|         | /opt/local-path-provisioner/pvc-83c31ed0-fc42-4249-94b0-a7e77464cc71_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-685250 addons disable                                                                | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:50 UTC | 19 Sep 24 18:50 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:50 UTC | 19 Sep 24 18:50 UTC |
	|         | addons-685250                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:50 UTC | 19 Sep 24 18:50 UTC |
	|         | -p addons-685250                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-685250 ip                                                                            | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	| addons  | addons-685250 addons disable                                                                | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-685250 addons disable                                                                | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-685250 addons disable                                                                | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	|         | addons-685250                                                                               |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	|         | -p addons-685250                                                                            |                        |         |         |                     |                     |
	| addons  | addons-685250 addons disable                                                                | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-685250 addons                                                                        | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:55 UTC | 19 Sep 24 18:55 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 18:39:03
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 18:39:03.200212  761388 out.go:345] Setting OutFile to fd 1 ...
	I0919 18:39:03.200467  761388 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:39:03.200476  761388 out.go:358] Setting ErrFile to fd 2...
	I0919 18:39:03.200481  761388 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:39:03.200718  761388 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-753213/.minikube/bin
	I0919 18:39:03.201426  761388 out.go:352] Setting JSON to false
	I0919 18:39:03.202398  761388 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12093,"bootTime":1726759050,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 18:39:03.202515  761388 start.go:139] virtualization: kvm guest
	I0919 18:39:03.204903  761388 out.go:177] * [addons-685250] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0919 18:39:03.206237  761388 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 18:39:03.206258  761388 notify.go:220] Checking for updates...
	I0919 18:39:03.208919  761388 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 18:39:03.210261  761388 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-753213/kubeconfig
	I0919 18:39:03.211535  761388 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-753213/.minikube
	I0919 18:39:03.212802  761388 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 18:39:03.213964  761388 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 18:39:03.215359  761388 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 18:39:03.237406  761388 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0919 18:39:03.237534  761388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:39:03.283495  761388 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-19 18:39:03.274719559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 18:39:03.283600  761388 docker.go:318] overlay module found
	I0919 18:39:03.286271  761388 out.go:177] * Using the docker driver based on user configuration
	I0919 18:39:03.287521  761388 start.go:297] selected driver: docker
	I0919 18:39:03.287534  761388 start.go:901] validating driver "docker" against <nil>
	I0919 18:39:03.287545  761388 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 18:39:03.288361  761388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:39:03.333412  761388 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-19 18:39:03.324780201 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 18:39:03.333593  761388 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 18:39:03.333839  761388 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 18:39:03.335585  761388 out.go:177] * Using Docker driver with root privileges
	I0919 18:39:03.336930  761388 cni.go:84] Creating CNI manager for ""
	I0919 18:39:03.336986  761388 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 18:39:03.336997  761388 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 18:39:03.337090  761388 start.go:340] cluster config:
	{Name:addons-685250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-685250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 18:39:03.338526  761388 out.go:177] * Starting "addons-685250" primary control-plane node in "addons-685250" cluster
	I0919 18:39:03.339809  761388 cache.go:121] Beginning downloading kic base image for docker with crio
	I0919 18:39:03.340995  761388 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0919 18:39:03.342026  761388 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 18:39:03.342057  761388 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19664-753213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0919 18:39:03.342055  761388 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0919 18:39:03.342063  761388 cache.go:56] Caching tarball of preloaded images
	I0919 18:39:03.342182  761388 preload.go:172] Found /home/jenkins/minikube-integration/19664-753213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 18:39:03.342194  761388 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0919 18:39:03.342520  761388 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/config.json ...
	I0919 18:39:03.342542  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/config.json: {Name:mk74efcccadcff6ea4a0787d2832be4be3984d30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:03.359223  761388 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0919 18:39:03.359412  761388 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0919 18:39:03.359431  761388 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0919 18:39:03.359435  761388 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0919 18:39:03.359442  761388 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0919 18:39:03.359450  761388 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0919 18:39:14.708408  761388 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0919 18:39:14.708455  761388 cache.go:194] Successfully downloaded all kic artifacts
	I0919 18:39:14.708519  761388 start.go:360] acquireMachinesLock for addons-685250: {Name:mk56c74bc959dec1fb8992b737e0e35c0cd4ad03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 18:39:14.708642  761388 start.go:364] duration metric: took 84.107µs to acquireMachinesLock for "addons-685250"
	I0919 18:39:14.708671  761388 start.go:93] Provisioning new machine with config: &{Name:addons-685250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-685250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 18:39:14.708780  761388 start.go:125] createHost starting for "" (driver="docker")
	I0919 18:39:14.710766  761388 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0919 18:39:14.711013  761388 start.go:159] libmachine.API.Create for "addons-685250" (driver="docker")
	I0919 18:39:14.711068  761388 client.go:168] LocalClient.Create starting
	I0919 18:39:14.711150  761388 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca.pem
	I0919 18:39:14.824308  761388 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/cert.pem
	I0919 18:39:15.025789  761388 cli_runner.go:164] Run: docker network inspect addons-685250 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 18:39:15.041206  761388 cli_runner.go:211] docker network inspect addons-685250 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 18:39:15.041292  761388 network_create.go:284] running [docker network inspect addons-685250] to gather additional debugging logs...
	I0919 18:39:15.041313  761388 cli_runner.go:164] Run: docker network inspect addons-685250
	W0919 18:39:15.056441  761388 cli_runner.go:211] docker network inspect addons-685250 returned with exit code 1
	I0919 18:39:15.056478  761388 network_create.go:287] error running [docker network inspect addons-685250]: docker network inspect addons-685250: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-685250 not found
	I0919 18:39:15.056490  761388 network_create.go:289] output of [docker network inspect addons-685250]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-685250 not found
	
	** /stderr **
	I0919 18:39:15.056606  761388 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 18:39:15.072776  761388 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001446920}
	I0919 18:39:15.072824  761388 network_create.go:124] attempt to create docker network addons-685250 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0919 18:39:15.072890  761388 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-685250 addons-685250
	I0919 18:39:15.132522  761388 network_create.go:108] docker network addons-685250 192.168.49.0/24 created
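
For reference, the subnet and gateway chosen above can be confirmed from the host once the network exists; a small sketch with the docker CLI (names and expected values taken from this run, the comment is ours):

	docker network inspect addons-685250 \
	  --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	# expected here: 192.168.49.0/24 192.168.49.1
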
	I0919 18:39:15.132554  761388 kic.go:121] calculated static IP "192.168.49.2" for the "addons-685250" container
	I0919 18:39:15.132644  761388 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 18:39:15.147671  761388 cli_runner.go:164] Run: docker volume create addons-685250 --label name.minikube.sigs.k8s.io=addons-685250 --label created_by.minikube.sigs.k8s.io=true
	I0919 18:39:15.163961  761388 oci.go:103] Successfully created a docker volume addons-685250
	I0919 18:39:15.164048  761388 cli_runner.go:164] Run: docker run --rm --name addons-685250-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-685250 --entrypoint /usr/bin/test -v addons-685250:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
	I0919 18:39:22.072772  761388 cli_runner.go:217] Completed: docker run --rm --name addons-685250-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-685250 --entrypoint /usr/bin/test -v addons-685250:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (6.908674607s)
	I0919 18:39:22.072803  761388 oci.go:107] Successfully prepared a docker volume addons-685250
	I0919 18:39:22.072836  761388 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 18:39:22.072868  761388 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 18:39:22.072944  761388 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19664-753213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-685250:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 18:39:26.483616  761388 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19664-753213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-685250:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (4.41062526s)
	I0919 18:39:26.483649  761388 kic.go:203] duration metric: took 4.410778812s to extract preloaded images to volume ...
	W0919 18:39:26.483780  761388 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0919 18:39:26.483868  761388 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 18:39:26.529192  761388 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-685250 --name addons-685250 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-685250 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-685250 --network addons-685250 --ip 192.168.49.2 --volume addons-685250:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
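
Each --publish=127.0.0.1:: flag in the docker run above maps a container port to an ephemeral host port; the SSH mapping that provisioning dials later in this log (127.0.0.1:33518) can be recovered with a plain docker command, for example:

	docker port addons-685250 22/tcp
	# on this run: 127.0.0.1:33518
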
	I0919 18:39:26.802037  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Running}}
	I0919 18:39:26.820911  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:26.839572  761388 cli_runner.go:164] Run: docker exec addons-685250 stat /var/lib/dpkg/alternatives/iptables
	I0919 18:39:26.880131  761388 oci.go:144] the created container "addons-685250" has a running status.
	I0919 18:39:26.880165  761388 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa...
	I0919 18:39:27.339670  761388 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 18:39:27.361758  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:27.379045  761388 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 18:39:27.379068  761388 kic_runner.go:114] Args: [docker exec --privileged addons-685250 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 18:39:27.421090  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:27.437982  761388 machine.go:93] provisionDockerMachine start ...
	I0919 18:39:27.438079  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:27.456233  761388 main.go:141] libmachine: Using SSH client type: native
	I0919 18:39:27.456524  761388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33518 <nil> <nil>}
	I0919 18:39:27.456542  761388 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 18:39:27.594819  761388 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-685250
	
	I0919 18:39:27.594862  761388 ubuntu.go:169] provisioning hostname "addons-685250"
	I0919 18:39:27.594952  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:27.613368  761388 main.go:141] libmachine: Using SSH client type: native
	I0919 18:39:27.613592  761388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33518 <nil> <nil>}
	I0919 18:39:27.613622  761388 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-685250 && echo "addons-685250" | sudo tee /etc/hostname
	I0919 18:39:27.754187  761388 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-685250
	
	I0919 18:39:27.754262  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:27.771895  761388 main.go:141] libmachine: Using SSH client type: native
	I0919 18:39:27.772132  761388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33518 <nil> <nil>}
	I0919 18:39:27.772152  761388 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-685250' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-685250/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-685250' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 18:39:27.903239  761388 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 18:39:27.903269  761388 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19664-753213/.minikube CaCertPath:/home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19664-753213/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19664-753213/.minikube}
	I0919 18:39:27.903324  761388 ubuntu.go:177] setting up certificates
	I0919 18:39:27.903341  761388 provision.go:84] configureAuth start
	I0919 18:39:27.903404  761388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-685250
	I0919 18:39:27.919357  761388 provision.go:143] copyHostCerts
	I0919 18:39:27.919427  761388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19664-753213/.minikube/key.pem (1679 bytes)
	I0919 18:39:27.919543  761388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19664-753213/.minikube/ca.pem (1082 bytes)
	I0919 18:39:27.919618  761388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19664-753213/.minikube/cert.pem (1123 bytes)
	I0919 18:39:27.919681  761388 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19664-753213/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca-key.pem org=jenkins.addons-685250 san=[127.0.0.1 192.168.49.2 addons-685250 localhost minikube]
	I0919 18:39:28.160212  761388 provision.go:177] copyRemoteCerts
	I0919 18:39:28.160283  761388 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 18:39:28.160320  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:28.177005  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:28.271718  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 18:39:28.293331  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 18:39:28.314500  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 18:39:28.335572  761388 provision.go:87] duration metric: took 432.21249ms to configureAuth
	I0919 18:39:28.335604  761388 ubuntu.go:193] setting minikube options for container-runtime
	I0919 18:39:28.335790  761388 config.go:182] Loaded profile config "addons-685250": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 18:39:28.335896  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:28.352244  761388 main.go:141] libmachine: Using SSH client type: native
	I0919 18:39:28.352438  761388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33518 <nil> <nil>}
	I0919 18:39:28.352454  761388 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 18:39:28.570762  761388 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 18:39:28.570788  761388 machine.go:96] duration metric: took 1.132783666s to provisionDockerMachine
	I0919 18:39:28.570801  761388 client.go:171] duration metric: took 13.859723313s to LocalClient.Create
	I0919 18:39:28.570823  761388 start.go:167] duration metric: took 13.859810827s to libmachine.API.Create "addons-685250"
	I0919 18:39:28.570832  761388 start.go:293] postStartSetup for "addons-685250" (driver="docker")
	I0919 18:39:28.570846  761388 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 18:39:28.570928  761388 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 18:39:28.570969  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:28.587920  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:28.684315  761388 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 18:39:28.687444  761388 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 18:39:28.687482  761388 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 18:39:28.687493  761388 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 18:39:28.687502  761388 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 18:39:28.687516  761388 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-753213/.minikube/addons for local assets ...
	I0919 18:39:28.687596  761388 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-753213/.minikube/files for local assets ...
	I0919 18:39:28.687629  761388 start.go:296] duration metric: took 116.788714ms for postStartSetup
	I0919 18:39:28.687939  761388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-685250
	I0919 18:39:28.704801  761388 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/config.json ...
	I0919 18:39:28.705071  761388 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 18:39:28.705124  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:28.721672  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:28.816217  761388 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 18:39:28.820354  761388 start.go:128] duration metric: took 14.111556683s to createHost
	I0919 18:39:28.820377  761388 start.go:83] releasing machines lock for "addons-685250", held for 14.111720986s
	I0919 18:39:28.820433  761388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-685250
	I0919 18:39:28.837043  761388 ssh_runner.go:195] Run: cat /version.json
	I0919 18:39:28.837093  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:28.837137  761388 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 18:39:28.837212  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:28.853306  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:28.853640  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:29.015641  761388 ssh_runner.go:195] Run: systemctl --version
	I0919 18:39:29.019690  761388 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 18:39:29.156274  761388 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 18:39:29.160605  761388 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 18:39:29.178821  761388 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 18:39:29.178900  761388 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 18:39:29.204313  761388 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 18:39:29.204337  761388 start.go:495] detecting cgroup driver to use...
	I0919 18:39:29.204370  761388 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0919 18:39:29.204409  761388 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 18:39:29.218099  761388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 18:39:29.228094  761388 docker.go:217] disabling cri-docker service (if available) ...
	I0919 18:39:29.228158  761388 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 18:39:29.240433  761388 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 18:39:29.253142  761388 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 18:39:29.326278  761388 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 18:39:29.406802  761388 docker.go:233] disabling docker service ...
	I0919 18:39:29.406859  761388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 18:39:29.424951  761388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 18:39:29.435168  761388 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 18:39:29.514566  761388 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 18:39:29.591355  761388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 18:39:29.601869  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 18:39:29.616535  761388 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0919 18:39:29.616600  761388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:39:29.625293  761388 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 18:39:29.625347  761388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:39:29.634150  761388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:39:29.642705  761388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:39:29.651092  761388 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 18:39:29.659117  761388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:39:29.667830  761388 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:39:29.681755  761388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:39:29.690617  761388 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 18:39:29.698112  761388 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 18:39:29.705724  761388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:39:29.785529  761388 ssh_runner.go:195] Run: sudo systemctl restart crio
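
Read together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings before the restart; a sketch showing only the keys touched here, with section placement assumed from the stock crio.conf layout rather than taken from this run:

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"
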
	I0919 18:39:29.878210  761388 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 18:39:29.878295  761388 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 18:39:29.881824  761388 start.go:563] Will wait 60s for crictl version
	I0919 18:39:29.881889  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:39:29.884918  761388 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 18:39:29.918116  761388 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 18:39:29.918200  761388 ssh_runner.go:195] Run: crio --version
	I0919 18:39:29.952309  761388 ssh_runner.go:195] Run: crio --version
	I0919 18:39:29.988286  761388 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0919 18:39:29.989606  761388 cli_runner.go:164] Run: docker network inspect addons-685250 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 18:39:30.005833  761388 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 18:39:30.009469  761388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 18:39:30.020164  761388 kubeadm.go:883] updating cluster {Name:addons-685250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-685250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 18:39:30.020281  761388 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 18:39:30.020325  761388 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 18:39:30.083858  761388 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 18:39:30.083879  761388 crio.go:433] Images already preloaded, skipping extraction
	I0919 18:39:30.083926  761388 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 18:39:30.116167  761388 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 18:39:30.116190  761388 cache_images.go:84] Images are preloaded, skipping loading
	I0919 18:39:30.116199  761388 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0919 18:39:30.116364  761388 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-685250 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-685250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 18:39:30.116428  761388 ssh_runner.go:195] Run: crio config
	I0919 18:39:30.156650  761388 cni.go:84] Creating CNI manager for ""
	I0919 18:39:30.156675  761388 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 18:39:30.156688  761388 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 18:39:30.156711  761388 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-685250 NodeName:addons-685250 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 18:39:30.156845  761388 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-685250"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 18:39:30.156908  761388 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0919 18:39:30.165387  761388 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 18:39:30.165448  761388 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 18:39:30.173207  761388 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0919 18:39:30.188946  761388 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 18:39:30.205638  761388 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
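
kubeadm itself can validate the rendered config before init is ever attempted; a minimal sketch, assuming the binaries path shown in this log and the kubeadm.yaml.new path written by the scp step above (kubeadm config validate is available in v1.31):

	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
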
	I0919 18:39:30.222877  761388 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0919 18:39:30.226085  761388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 18:39:30.236096  761388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:39:30.319405  761388 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 18:39:30.332104  761388 certs.go:68] Setting up /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250 for IP: 192.168.49.2
	I0919 18:39:30.332125  761388 certs.go:194] generating shared ca certs ...
	I0919 18:39:30.332140  761388 certs.go:226] acquiring lock for ca certs: {Name:mkac4e621bd7a8886df3f6838bd34b99172c371a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.332275  761388 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19664-753213/.minikube/ca.key
	I0919 18:39:30.528690  761388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-753213/.minikube/ca.crt ...
	I0919 18:39:30.528724  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/ca.crt: {Name:mked4ee6d8831516d03c840d59935532e3f21cd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.528941  761388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-753213/.minikube/ca.key ...
	I0919 18:39:30.528958  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/ca.key: {Name:mkcb02ba3f86d66b352caba2841d6dd380f76edb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.529067  761388 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19664-753213/.minikube/proxy-client-ca.key
	I0919 18:39:30.624034  761388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-753213/.minikube/proxy-client-ca.crt ...
	I0919 18:39:30.624068  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/proxy-client-ca.crt: {Name:mkaa7904f1d229a9140b6f62d1d672cf00a2f2cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.624277  761388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-753213/.minikube/proxy-client-ca.key ...
	I0919 18:39:30.624295  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/proxy-client-ca.key: {Name:mkb6bb0d0409e9bd1f254506994f2a2447e5cc79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.624398  761388 certs.go:256] generating profile certs ...
	I0919 18:39:30.624464  761388 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.key
	I0919 18:39:30.624490  761388 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt with IP's: []
	I0919 18:39:30.752151  761388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt ...
	I0919 18:39:30.752185  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt: {Name:mk69a3ec8793b5371f583f88b2bebacea2af07ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.752390  761388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.key ...
	I0919 18:39:30.752406  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.key: {Name:mk7d143fc1d3dd645310e55acf6f951beafc9848 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.752506  761388 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.key.89e36966
	I0919 18:39:30.752526  761388 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.crt.89e36966 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0919 18:39:30.915660  761388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.crt.89e36966 ...
	I0919 18:39:30.915697  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.crt.89e36966: {Name:mkdb41eb017de5d424bda2067b62b8ceafaf07c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.915911  761388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.key.89e36966 ...
	I0919 18:39:30.915931  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.key.89e36966: {Name:mkbc3d5e5a7473c69994a57b2f0a8b8707ffe9e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.916041  761388 certs.go:381] copying /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.crt.89e36966 -> /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.crt
	I0919 18:39:30.916130  761388 certs.go:385] copying /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.key.89e36966 -> /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.key
	I0919 18:39:30.916176  761388 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/proxy-client.key
	I0919 18:39:30.916195  761388 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/proxy-client.crt with IP's: []
	I0919 18:39:31.094514  761388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/proxy-client.crt ...
	I0919 18:39:31.094599  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/proxy-client.crt: {Name:mk9dc2f777ee8d63ffc9f5a10453c45f6382bf93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:31.094776  761388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/proxy-client.key ...
	I0919 18:39:31.094791  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/proxy-client.key: {Name:mk32678ed11fe18054a48114b5283e466fb989c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:31.094999  761388 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 18:39:31.095055  761388 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca.pem (1082 bytes)
	I0919 18:39:31.095092  761388 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/cert.pem (1123 bytes)
	I0919 18:39:31.095124  761388 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/key.pem (1679 bytes)
	I0919 18:39:31.095878  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 18:39:31.120600  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 18:39:31.142506  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 18:39:31.164187  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 18:39:31.185942  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0919 18:39:31.207396  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 18:39:31.229449  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 18:39:31.250877  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 18:39:31.272098  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 18:39:31.293403  761388 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 18:39:31.308896  761388 ssh_runner.go:195] Run: openssl version
	I0919 18:39:31.314017  761388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 18:39:31.322554  761388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:39:31.325634  761388 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:39:31.325693  761388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:39:31.331892  761388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 18:39:31.340220  761388 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 18:39:31.343178  761388 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 18:39:31.343230  761388 kubeadm.go:392] StartCluster: {Name:addons-685250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-685250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
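
The StartCluster entry above is minikube's cluster config struct printed in one piece with Go's %+v verb, which is why it reads as a single "{Name:... Driver:docker ...}" blob with bare keys for empty fields. A minimal sketch under the assumption of a heavily trimmed struct whose field names merely mirror the log (the real minikube type carries far more fields):

package main

import "fmt"

// Illustrative subset of the cluster config seen in the StartCluster log
// line above. Field names mirror the log output; this is not minikube's
// actual type definition.
type KubernetesConfig struct {
	KubernetesVersion string
	ClusterName       string
	ContainerRuntime  string
	ServiceCIDR       string
}

type ClusterConfig struct {
	Name             string
	Driver           string
	Memory           int // MiB
	CPUs             int
	KubernetesConfig KubernetesConfig
}

func main() {
	cc := ClusterConfig{
		Name:   "addons-685250",
		Driver: "docker",
		Memory: 4000,
		CPUs:   2,
		KubernetesConfig: KubernetesConfig{
			KubernetesVersion: "v1.31.1",
			ClusterName:       "addons-685250",
			ContainerRuntime:  "crio",
			ServiceCIDR:       "10.96.0.0/12",
		},
	}
	// %+v prints field names alongside values, producing the one-line
	// "{Name:addons-685250 Driver:docker ...}" form seen in the log.
	fmt.Printf("StartCluster: %+v\n", cc)
}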
	I0919 18:39:31.343328  761388 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 18:39:31.343377  761388 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 18:39:31.376569  761388 cri.go:89] found id: ""
	I0919 18:39:31.376645  761388 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 18:39:31.384955  761388 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 18:39:31.393013  761388 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 18:39:31.393065  761388 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 18:39:31.400980  761388 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 18:39:31.400998  761388 kubeadm.go:157] found existing configuration files:
	
	I0919 18:39:31.401035  761388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 18:39:31.408813  761388 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 18:39:31.408861  761388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 18:39:31.416662  761388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 18:39:31.424342  761388 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 18:39:31.424386  761388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 18:39:31.431658  761388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 18:39:31.438947  761388 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 18:39:31.438996  761388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 18:39:31.445986  761388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 18:39:31.453391  761388 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 18:39:31.453444  761388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 18:39:31.460734  761388 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 18:39:31.495835  761388 kubeadm.go:310] W0919 18:39:31.495183    1289 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0919 18:39:31.496393  761388 kubeadm.go:310] W0919 18:39:31.495823    1289 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0919 18:39:31.513844  761388 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-gcp\n", err: exit status 1
	I0919 18:39:31.563421  761388 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 18:39:40.033093  761388 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0919 18:39:40.033184  761388 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 18:39:40.033278  761388 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 18:39:40.033324  761388 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-gcp
	I0919 18:39:40.033356  761388 kubeadm.go:310] OS: Linux
	I0919 18:39:40.033398  761388 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 18:39:40.033437  761388 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0919 18:39:40.033482  761388 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 18:39:40.033521  761388 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 18:39:40.033566  761388 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 18:39:40.033607  761388 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 18:39:40.033655  761388 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 18:39:40.033699  761388 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 18:39:40.033736  761388 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0919 18:39:40.033793  761388 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 18:39:40.033891  761388 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 18:39:40.034008  761388 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 18:39:40.034100  761388 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 18:39:40.035787  761388 out.go:235]   - Generating certificates and keys ...
	I0919 18:39:40.035950  761388 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 18:39:40.036208  761388 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 18:39:40.036312  761388 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 18:39:40.036391  761388 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 18:39:40.036476  761388 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 18:39:40.036548  761388 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 18:39:40.036641  761388 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 18:39:40.036746  761388 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-685250 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 18:39:40.036794  761388 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 18:39:40.036940  761388 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-685250 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 18:39:40.037024  761388 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 18:39:40.037075  761388 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 18:39:40.037112  761388 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 18:39:40.037161  761388 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 18:39:40.037201  761388 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 18:39:40.037258  761388 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 18:39:40.037338  761388 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 18:39:40.037448  761388 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 18:39:40.037533  761388 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 18:39:40.037626  761388 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 18:39:40.037718  761388 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 18:39:40.039316  761388 out.go:235]   - Booting up control plane ...
	I0919 18:39:40.039415  761388 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 18:39:40.039524  761388 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 18:39:40.039619  761388 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 18:39:40.039728  761388 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 18:39:40.039841  761388 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 18:39:40.039909  761388 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 18:39:40.040093  761388 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 18:39:40.040237  761388 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 18:39:40.040290  761388 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.645723ms
	I0919 18:39:40.040356  761388 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0919 18:39:40.040404  761388 kubeadm.go:310] [api-check] The API server is healthy after 4.502008624s
	I0919 18:39:40.040492  761388 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 18:39:40.040605  761388 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 18:39:40.040687  761388 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 18:39:40.040875  761388 kubeadm.go:310] [mark-control-plane] Marking the node addons-685250 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 18:39:40.040960  761388 kubeadm.go:310] [bootstrap-token] Using token: ijm4ly.86nu9uivdcvgfqko
	I0919 18:39:40.042478  761388 out.go:235]   - Configuring RBAC rules ...
	I0919 18:39:40.042563  761388 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 18:39:40.042634  761388 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 18:39:40.042751  761388 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 18:39:40.042898  761388 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 18:39:40.043013  761388 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 18:39:40.043111  761388 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 18:39:40.043261  761388 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 18:39:40.043324  761388 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 18:39:40.043388  761388 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 18:39:40.043398  761388 kubeadm.go:310] 
	I0919 18:39:40.043485  761388 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 18:39:40.043499  761388 kubeadm.go:310] 
	I0919 18:39:40.043591  761388 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 18:39:40.043599  761388 kubeadm.go:310] 
	I0919 18:39:40.043634  761388 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 18:39:40.043719  761388 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 18:39:40.043765  761388 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 18:39:40.043770  761388 kubeadm.go:310] 
	I0919 18:39:40.043812  761388 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 18:39:40.043817  761388 kubeadm.go:310] 
	I0919 18:39:40.043857  761388 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 18:39:40.043862  761388 kubeadm.go:310] 
	I0919 18:39:40.043902  761388 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 18:39:40.043999  761388 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 18:39:40.044089  761388 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 18:39:40.044096  761388 kubeadm.go:310] 
	I0919 18:39:40.044175  761388 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 18:39:40.044258  761388 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 18:39:40.044266  761388 kubeadm.go:310] 
	I0919 18:39:40.044382  761388 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ijm4ly.86nu9uivdcvgfqko \
	I0919 18:39:40.044505  761388 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0d3b67c6a36b796da7b157a4d4acdf893c00e58f1cfebf42e9b32e5d1fd17179 \
	I0919 18:39:40.044525  761388 kubeadm.go:310] 	--control-plane 
	I0919 18:39:40.044531  761388 kubeadm.go:310] 
	I0919 18:39:40.044599  761388 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 18:39:40.044606  761388 kubeadm.go:310] 
	I0919 18:39:40.044684  761388 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ijm4ly.86nu9uivdcvgfqko \
	I0919 18:39:40.044851  761388 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0d3b67c6a36b796da7b157a4d4acdf893c00e58f1cfebf42e9b32e5d1fd17179 
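
All of the kubeadm output above reaches the log through minikube relaying the child process's stdout line by line (each relayed line is tagged kubeadm.go:310). A rough sketch of that relay pattern, with a placeholder echo command standing in for the real kubeadm invocation:

package main

import (
	"bufio"
	"fmt"
	"os/exec"
)

func main() {
	// Placeholder for the long-running "kubeadm init" command; echo is
	// used here so the sketch is runnable anywhere.
	cmd := exec.Command("/bin/bash", "-c",
		"echo '[init] Using Kubernetes version: v1.31.1'; echo '[preflight] Running pre-flight checks'")
	out, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(out)
	for sc.Scan() {
		// Relay each child line under the caller's own log prefix,
		// mirroring the kubeadm.go:310 lines above.
		fmt.Printf("kubeadm.go:310] %s\n", sc.Text())
	}
	if err := cmd.Wait(); err != nil {
		fmt.Println("kubeadm exited with error:", err)
	}
}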
	I0919 18:39:40.044867  761388 cni.go:84] Creating CNI manager for ""
	I0919 18:39:40.044876  761388 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 18:39:40.046449  761388 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0919 18:39:40.047787  761388 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 18:39:40.051623  761388 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0919 18:39:40.051638  761388 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 18:39:40.069179  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 18:39:40.264712  761388 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 18:39:40.264794  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:40.264800  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-685250 minikube.k8s.io/updated_at=2024_09_19T18_39_40_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef minikube.k8s.io/name=addons-685250 minikube.k8s.io/primary=true
	I0919 18:39:40.272124  761388 ops.go:34] apiserver oom_adj: -16
	I0919 18:39:40.450150  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:40.950813  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:41.450429  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:41.950463  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:42.450542  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:42.950992  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:43.451199  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:43.950242  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:44.012691  761388 kubeadm.go:1113] duration metric: took 3.747963897s to wait for elevateKubeSystemPrivileges
	I0919 18:39:44.012729  761388 kubeadm.go:394] duration metric: took 12.669506054s to StartCluster
	I0919 18:39:44.012758  761388 settings.go:142] acquiring lock: {Name:mkba96297ae0a710684a3a2a45be357ed7205f56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:44.012903  761388 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19664-753213/kubeconfig
	I0919 18:39:44.013318  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/kubeconfig: {Name:mk7bd3287a61595c1c20478c3038a77f636ffaa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:44.013536  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 18:39:44.013566  761388 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 18:39:44.013636  761388 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0919 18:39:44.013758  761388 addons.go:69] Setting yakd=true in profile "addons-685250"
	I0919 18:39:44.013778  761388 addons.go:69] Setting helm-tiller=true in profile "addons-685250"
	I0919 18:39:44.013797  761388 addons.go:69] Setting registry=true in profile "addons-685250"
	I0919 18:39:44.013801  761388 addons.go:69] Setting ingress=true in profile "addons-685250"
	I0919 18:39:44.013794  761388 addons.go:69] Setting metrics-server=true in profile "addons-685250"
	I0919 18:39:44.013782  761388 addons.go:234] Setting addon yakd=true in "addons-685250"
	I0919 18:39:44.013816  761388 addons.go:234] Setting addon ingress=true in "addons-685250"
	I0919 18:39:44.013818  761388 addons.go:69] Setting storage-provisioner=true in profile "addons-685250"
	I0919 18:39:44.013824  761388 addons.go:234] Setting addon metrics-server=true in "addons-685250"
	I0919 18:39:44.013824  761388 config.go:182] Loaded profile config "addons-685250": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 18:39:44.013835  761388 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-685250"
	I0919 18:39:44.013850  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.013852  761388 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-685250"
	I0919 18:39:44.013855  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.013828  761388 addons.go:234] Setting addon storage-provisioner=true in "addons-685250"
	I0919 18:39:44.013859  761388 addons.go:69] Setting ingress-dns=true in profile "addons-685250"
	I0919 18:39:44.013875  761388 addons.go:69] Setting inspektor-gadget=true in profile "addons-685250"
	I0919 18:39:44.013891  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.013904  761388 addons.go:69] Setting default-storageclass=true in profile "addons-685250"
	I0919 18:39:44.013905  761388 addons.go:69] Setting gcp-auth=true in profile "addons-685250"
	I0919 18:39:44.013920  761388 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-685250"
	I0919 18:39:44.013928  761388 mustload.go:65] Loading cluster: addons-685250
	I0919 18:39:44.013810  761388 addons.go:234] Setting addon helm-tiller=true in "addons-685250"
	I0919 18:39:44.013987  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.014106  761388 config.go:182] Loaded profile config "addons-685250": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 18:39:44.013760  761388 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-685250"
	I0919 18:39:44.014180  761388 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-685250"
	I0919 18:39:44.014213  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.014224  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.014234  761388 addons.go:69] Setting volcano=true in profile "addons-685250"
	I0919 18:39:44.014289  761388 addons.go:234] Setting addon volcano=true in "addons-685250"
	I0919 18:39:44.014321  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.014369  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.014420  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.014444  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.014529  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.014668  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.014766  761388 addons.go:69] Setting volumesnapshots=true in profile "addons-685250"
	I0919 18:39:44.014784  761388 addons.go:234] Setting addon volumesnapshots=true in "addons-685250"
	I0919 18:39:44.014224  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.014811  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.014813  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.013790  761388 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-685250"
	I0919 18:39:44.014885  761388 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-685250"
	I0919 18:39:44.014921  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.013892  761388 addons.go:234] Setting addon ingress-dns=true in "addons-685250"
	I0919 18:39:44.015381  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.015478  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.013782  761388 addons.go:69] Setting cloud-spanner=true in profile "addons-685250"
	I0919 18:39:44.015604  761388 addons.go:234] Setting addon cloud-spanner=true in "addons-685250"
	I0919 18:39:44.015632  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.013894  761388 addons.go:234] Setting addon inspektor-gadget=true in "addons-685250"
	I0919 18:39:44.015698  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.016016  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.016089  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.015481  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.016191  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.013861  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.017759  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.020298  761388 out.go:177] * Verifying Kubernetes components...
	I0919 18:39:44.015297  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.013811  761388 addons.go:234] Setting addon registry=true in "addons-685250"
	I0919 18:39:44.026436  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.028211  761388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:39:44.037105  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
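
The tangle of interleaved "Setting addon" and "Checking if ... exists" lines above is consistent with each addon being enabled on its own goroutine, so the log records them in completion order rather than list order. A minimal fan-out sketch under that assumption (addon names taken from the log; the actual enable logic is elided):

package main

import (
	"fmt"
	"sync"
)

func main() {
	addons := []string{"yakd", "helm-tiller", "registry", "ingress", "metrics-server"}
	var wg sync.WaitGroup
	for _, a := range addons {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			// Each goroutine logs independently, which is why the
			// per-addon lines interleave in the output above.
			fmt.Printf("Setting addon %s=true in \"addons-685250\"\n", name)
		}(a)
	}
	wg.Wait()
}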
	I0919 18:39:44.048567  761388 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0919 18:39:44.048657  761388 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 18:39:44.050374  761388 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0919 18:39:44.050397  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0919 18:39:44.050461  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.052343  761388 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0919 18:39:44.060733  761388 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 18:39:44.062707  761388 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0919 18:39:44.062730  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0919 18:39:44.062789  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.081544  761388 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0919 18:39:44.081631  761388 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0919 18:39:44.083278  761388 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0919 18:39:44.083339  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0919 18:39:44.083408  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.086304  761388 out.go:177]   - Using image docker.io/registry:2.8.3
	I0919 18:39:44.086735  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0919 18:39:44.088743  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0919 18:39:44.088872  761388 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0919 18:39:44.091114  761388 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-685250"
	I0919 18:39:44.091164  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.091489  761388 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0919 18:39:44.091508  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0919 18:39:44.091564  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.091649  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.091952  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.092800  761388 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0919 18:39:44.092818  761388 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0919 18:39:44.092889  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.094032  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0919 18:39:44.101275  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0919 18:39:44.103871  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0919 18:39:44.106750  761388 addons.go:234] Setting addon default-storageclass=true in "addons-685250"
	I0919 18:39:44.106804  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.107282  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	W0919 18:39:44.109675  761388 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0919 18:39:44.110326  761388 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 18:39:44.110334  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0919 18:39:44.112386  761388 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:39:44.112408  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 18:39:44.112472  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.112565  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0919 18:39:44.113382  761388 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0919 18:39:44.114898  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0919 18:39:44.114906  761388 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 18:39:44.114925  761388 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 18:39:44.114984  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.116662  761388 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0919 18:39:44.116682  761388 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0919 18:39:44.116748  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.119259  761388 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0919 18:39:44.120516  761388 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0919 18:39:44.120540  761388 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0919 18:39:44.120610  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.123773  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.136078  761388 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0919 18:39:44.138681  761388 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0919 18:39:44.138709  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0919 18:39:44.138773  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.144207  761388 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0919 18:39:44.145527  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.145578  761388 out.go:177]   - Using image docker.io/busybox:stable
	I0919 18:39:44.146995  761388 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0919 18:39:44.147017  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0919 18:39:44.147076  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.152809  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.156308  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0919 18:39:44.157886  761388 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0919 18:39:44.157903  761388 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0919 18:39:44.157925  761388 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0919 18:39:44.157985  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.162886  761388 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 18:39:44.162909  761388 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 18:39:44.162966  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.163450  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.166881  761388 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0919 18:39:44.166906  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0919 18:39:44.166969  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.172034  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.180781  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 18:39:44.183673  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.189557  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.190040  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.198542  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.202993  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.203703  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.205321  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.208823  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.209666  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	W0919 18:39:44.241755  761388 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0919 18:39:44.241799  761388 retry.go:31] will retry after 368.513545ms: ssh: handshake failed: EOF
	W0919 18:39:44.241901  761388 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0919 18:39:44.241912  761388 retry.go:31] will retry after 353.358743ms: ssh: handshake failed: EOF
	W0919 18:39:44.241992  761388 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0919 18:39:44.242019  761388 retry.go:31] will retry after 239.291473ms: ssh: handshake failed: EOF
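
The three dial failures above are absorbed by the retry helper: log the transient error, sleep a randomized delay, and try again. A sketch of that pattern with a simple bounded-attempts policy, as an illustration rather than minikube's actual retry.go implementation:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithDelay retries op up to maxAttempts times, sleeping a random
// delay (under 500ms, matching the magnitudes logged above) between
// attempts. Hypothetical helper for illustration only.
func retryWithDelay(maxAttempts int, op func() error) error {
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = op(); err == nil {
			return nil
		}
		delay := time.Duration(rand.Int63n(int64(500 * time.Millisecond)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retryWithDelay(5, func() error {
		calls++
		if calls < 3 {
			// Mimics the transient SSH dial failure in the log.
			return errors.New("ssh: handshake failed: EOF")
		}
		return nil
	})
	fmt.Println("result:", err, "after", calls, "attempts")
}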
	I0919 18:39:44.351392  761388 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 18:39:44.437649  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0919 18:39:44.536099  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0919 18:39:44.541975  761388 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0919 18:39:44.542004  761388 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0919 18:39:44.544666  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:39:44.646013  761388 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0919 18:39:44.646047  761388 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0919 18:39:44.743483  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 18:39:44.743812  761388 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0919 18:39:44.743879  761388 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0919 18:39:44.839790  761388 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 18:39:44.839821  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0919 18:39:44.840867  761388 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0919 18:39:44.840892  761388 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0919 18:39:44.844891  761388 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0919 18:39:44.844913  761388 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0919 18:39:44.859724  761388 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0919 18:39:44.859754  761388 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0919 18:39:44.945601  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0919 18:39:44.948297  761388 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0919 18:39:44.948369  761388 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0919 18:39:44.953207  761388 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0919 18:39:44.953285  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0919 18:39:45.049434  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0919 18:39:45.049642  761388 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0919 18:39:45.049698  761388 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0919 18:39:45.055848  761388 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0919 18:39:45.055950  761388 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0919 18:39:45.058998  761388 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 18:39:45.059024  761388 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 18:39:45.141944  761388 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0919 18:39:45.141986  761388 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0919 18:39:45.156162  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0919 18:39:45.246810  761388 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0919 18:39:45.246840  761388 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0919 18:39:45.256490  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0919 18:39:45.437813  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0919 18:39:45.441833  761388 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 18:39:45.441871  761388 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 18:39:45.549176  761388 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0919 18:39:45.549265  761388 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0919 18:39:45.637502  761388 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0919 18:39:45.637591  761388 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0919 18:39:45.642826  761388 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.2913856s)
	I0919 18:39:45.644038  761388 node_ready.go:35] waiting up to 6m0s for node "addons-685250" to be "Ready" ...
	I0919 18:39:45.644391  761388 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.463571637s)
	I0919 18:39:45.644468  761388 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
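
The Completed line above wraps a sed pipeline that splices a hosts{} block (mapping host.minikube.internal to the host gateway IP) into CoreDNS's Corefile just before the forward directive. The same string transformation expressed in Go, as an illustrative sketch rather than minikube's actual code:

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} block immediately before the
// "forward . /etc/resolv.conf" directive in a Corefile, mirroring the
// first sed expression in the command logged above.
func injectHostRecord(corefile, ip string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", ip)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			b.WriteString(hostsBlock)
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	// Abbreviated Corefile for demonstration purposes.
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
}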
	I0919 18:39:45.647199  761388 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0919 18:39:45.647259  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0919 18:39:45.737336  761388 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0919 18:39:45.737429  761388 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0919 18:39:45.754802  761388 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0919 18:39:45.754834  761388 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0919 18:39:45.836195  761388 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0919 18:39:45.836236  761388 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0919 18:39:45.851797  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 18:39:45.936024  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0919 18:39:45.956936  761388 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0919 18:39:45.956972  761388 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0919 18:39:46.159873  761388 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0919 18:39:46.159908  761388 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0919 18:39:46.337448  761388 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0919 18:39:46.337478  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0919 18:39:46.356760  761388 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-685250" context rescaled to 1 replicas
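The rescale logged above drops the stock two-replica coredns deployment to a single replica, which is sufficient for a one-node cluster. The same operation by hand (names from this run):

	kubectl --context addons-685250 -n kube-system scale deployment coredns --replicas=1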
	I0919 18:39:46.436892  761388 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0919 18:39:46.436928  761388 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0919 18:39:46.537037  761388 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0919 18:39:46.537072  761388 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0919 18:39:46.746236  761388 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:39:46.746266  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0919 18:39:46.854918  761388 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0919 18:39:46.855018  761388 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0919 18:39:46.946936  761388 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0919 18:39:46.946983  761388 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0919 18:39:47.236798  761388 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0919 18:39:47.236841  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0919 18:39:47.246825  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:39:47.257114  761388 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0919 18:39:47.257149  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0919 18:39:47.453170  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0919 18:39:47.542740  761388 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0919 18:39:47.542772  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0919 18:39:47.659810  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:39:47.759785  761388 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0919 18:39:47.759819  761388 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0919 18:39:47.957548  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0919 18:39:50.147172  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:39:50.150873  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.713170158s)
	I0919 18:39:50.150919  761388 addons.go:475] Verifying addon ingress=true in "addons-685250"
	I0919 18:39:50.150938  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.614729552s)
	I0919 18:39:50.151045  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.606300895s)
	I0919 18:39:50.151091  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.407584065s)
	I0919 18:39:50.151204  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.205541455s)
	I0919 18:39:50.151283  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.101743958s)
	I0919 18:39:50.151334  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.995098572s)
	I0919 18:39:50.151399  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.89486624s)
	I0919 18:39:50.151505  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.713655603s)
	I0919 18:39:50.151528  761388 addons.go:475] Verifying addon registry=true in "addons-685250"
	I0919 18:39:50.151594  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.29976078s)
	I0919 18:39:50.151618  761388 addons.go:475] Verifying addon metrics-server=true in "addons-685250"
	I0919 18:39:50.151657  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.215596812s)
	I0919 18:39:50.152907  761388 out.go:177] * Verifying ingress addon...
	I0919 18:39:50.153936  761388 out.go:177] * Verifying registry addon...
	I0919 18:39:50.153951  761388 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-685250 service yakd-dashboard -n yakd-dashboard
	
	I0919 18:39:50.155824  761388 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0919 18:39:50.157505  761388 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
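These kapi waiters poll pods by label selector until every match reports Ready; the repeated "current state: Pending" lines that follow are those polls. A one-shot equivalent with kubectl, using the selectors and namespaces as logged (a sketch; kubectl wait errors out instead of polling if no pods match yet):

	kubectl --context addons-685250 -n ingress-nginx wait pod \
	  -l app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=6m
	kubectl --context addons-685250 -n kube-system wait pod \
	  -l kubernetes.io/minikube-addons=registry --for=condition=Ready --timeout=6m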
	W0919 18:39:50.163513  761388 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
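The warning above is a benign optimistic-concurrency conflict: two writers raced to update the local-path StorageClass, and the loser must re-read the object and retry. The default flag is only an annotation, so redoing it by hand is safe; a sketch using the class names from this run and the standard annotation key:

	kubectl --context addons-685250 patch storageclass local-path \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
	kubectl --context addons-685250 patch storageclass standard \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'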
	I0919 18:39:50.238665  761388 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0919 18:39:50.238695  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:50.238959  761388 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0919 18:39:50.238987  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:50.660404  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:50.662046  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:50.877367  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.630488674s)
	W0919 18:39:50.877434  761388 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0919 18:39:50.877461  761388 retry.go:31] will retry after 374.811419ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
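This failure is the usual CRD ordering race: the VolumeSnapshot CRDs and a VolumeSnapshotClass object were applied in one kubectl invocation, and the CRDs were not yet established when the custom resource was validated, hence "ensure CRDs are installed first". The retry logged below completes once the CRDs have registered; done by hand, you would apply the CRDs first and wait for them to be established, roughly (file names are the basenames of the manifests above):

	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f csi-hostpath-snapshotclass.yaml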
	I0919 18:39:50.877563  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.424342572s)
	I0919 18:39:51.159983  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:51.160342  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:51.251656  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.294045721s)
	I0919 18:39:51.251706  761388 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-685250"
	I0919 18:39:51.252726  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:39:51.253330  761388 out.go:177] * Verifying csi-hostpath-driver addon...
	I0919 18:39:51.255845  761388 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0919 18:39:51.260109  761388 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0919 18:39:51.260134  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:51.299405  761388 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0919 18:39:51.299470  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:51.319259  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
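The two lines above show how a docker-driver node is reached: container port 22 is published on an ephemeral host port, which is read back from docker inspect, and an SSH client is then pointed at 127.0.0.1 with the machine key. The same lookup from a shell (container name and key path from this run; 33518 is simply whatever port Docker assigned):

	PORT=$(docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-685250)
	ssh -p "$PORT" \
	  -i /home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa \
	  docker@127.0.0.1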
	I0919 18:39:51.435849  761388 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0919 18:39:51.455177  761388 addons.go:234] Setting addon gcp-auth=true in "addons-685250"
	I0919 18:39:51.455235  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:51.455622  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:51.473709  761388 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0919 18:39:51.473768  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:51.492852  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:51.660242  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:51.660451  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:51.763672  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:52.148125  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:39:52.160486  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:52.160637  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:52.260177  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:52.659866  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:52.660361  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:52.759357  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:53.159414  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:53.160699  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:53.260412  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:53.660465  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:53.660995  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:53.760079  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:54.036339  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.783560208s)
	I0919 18:39:54.036401  761388 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.56265651s)
	I0919 18:39:54.037930  761388 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 18:39:54.039158  761388 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0919 18:39:54.040281  761388 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0919 18:39:54.040295  761388 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0919 18:39:54.060953  761388 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0919 18:39:54.060982  761388 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0919 18:39:54.078061  761388 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0919 18:39:54.078081  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0919 18:39:54.096196  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0919 18:39:54.159825  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:54.161174  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:54.259118  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:54.649396  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:39:54.664552  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:54.666437  761388 addons.go:475] Verifying addon gcp-auth=true in "addons-685250"
	I0919 18:39:54.666458  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:54.669012  761388 out.go:177] * Verifying gcp-auth addon...
	I0919 18:39:54.671405  761388 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0919 18:39:54.762155  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:54.762165  761388 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0919 18:39:54.762193  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:55.159689  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:55.161131  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:55.174401  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:55.259291  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:55.659983  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:55.660209  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:55.674181  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:55.758821  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:56.159552  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:56.161022  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:56.174326  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:56.259237  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:56.660149  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:56.660452  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:56.675011  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:56.759761  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:57.147230  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:39:57.160802  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:57.160843  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:57.174625  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:57.259483  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:57.659641  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:57.660974  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:57.674433  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:57.759804  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:58.159364  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:58.160396  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:58.175074  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:58.258973  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:58.659663  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:58.659995  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:58.674333  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:58.759220  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:59.159931  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:59.160111  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:59.174241  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:59.259030  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:59.647936  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:39:59.660361  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:59.660641  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:59.674569  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:59.759432  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:00.160240  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:00.160488  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:00.174961  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:00.259892  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:00.660179  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:00.660554  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:00.675141  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:00.758994  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:01.160048  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:01.160048  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:01.174593  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:01.259801  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:01.659777  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:01.660892  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:01.674204  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:01.759169  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:02.147887  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:02.160172  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:02.160247  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:02.174624  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:02.259598  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:02.659674  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:02.660694  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:02.674100  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:02.759727  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:03.159593  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:03.160617  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:03.174020  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:03.259297  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:03.660462  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:03.660957  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:03.674094  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:03.759774  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:04.159328  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:04.160575  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:04.174927  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:04.259749  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:04.647664  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:04.659478  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:04.661089  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:04.674181  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:04.759138  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:05.160148  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:05.160420  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:05.174732  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:05.259905  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:05.659969  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:05.660156  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:05.674731  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:05.759280  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:06.160047  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:06.160189  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:06.174412  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:06.259142  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:06.660052  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:06.660419  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:06.674781  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:06.759973  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:07.147840  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:07.159737  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:07.160196  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:07.174616  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:07.259365  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:07.659184  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:07.660781  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:07.674067  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:07.758888  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:08.160134  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:08.160271  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:08.174692  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:08.259835  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:08.659150  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:08.660428  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:08.674754  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:08.759483  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:09.159321  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:09.160653  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:09.175114  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:09.260634  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:09.647196  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:09.659462  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:09.660545  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:09.674993  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:09.759810  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:10.159952  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:10.161096  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:10.174611  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:10.259487  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:10.659118  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:10.660327  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:10.674867  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:10.759802  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:11.159342  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:11.160885  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:11.173987  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:11.259734  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:11.647819  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:11.659862  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:11.660211  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:11.674274  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:11.759168  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:12.160283  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:12.160439  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:12.175052  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:12.260097  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:12.659816  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:12.660819  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:12.674404  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:12.759164  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:13.160264  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:13.160357  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:13.174537  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:13.259736  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:13.660466  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:13.660513  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:13.674991  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:13.759495  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:14.146772  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:14.159525  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:14.159867  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:14.174094  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:14.260124  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:14.660152  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:14.660362  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:14.674852  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:14.759444  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:15.159996  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:15.160894  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:15.174310  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:15.259417  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:15.659374  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:15.660883  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:15.674695  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:15.759222  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:16.147487  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:16.159970  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:16.160975  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:16.174207  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:16.258997  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:16.660164  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:16.660247  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:16.674461  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:16.759434  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:17.160167  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:17.160211  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:17.174364  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:17.259173  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:17.658940  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:17.660444  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:17.674638  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:17.759422  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:18.159603  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:18.160463  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:18.174991  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:18.258926  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:18.647877  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:18.660091  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:18.660270  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:18.674507  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:18.759470  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:19.160102  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:19.160359  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:19.174708  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:19.259350  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:19.659690  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:19.660560  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:19.673993  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:19.759643  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:20.159760  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:20.160739  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:20.174018  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:20.259759  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:20.659618  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:20.660617  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:20.673972  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:20.759708  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:21.147628  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:21.159869  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:21.161165  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:21.174520  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:21.259323  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:21.659211  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:21.660585  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:21.673760  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:21.759818  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:22.159736  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:22.160153  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:22.174301  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:22.259002  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:22.659694  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:22.661106  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:22.674760  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:22.759413  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:23.159284  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:23.160467  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:23.174960  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:23.259223  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:23.647843  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:23.659948  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:23.659983  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:23.674196  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:23.758885  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:24.159695  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:24.160775  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:24.174128  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:24.260104  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:24.660632  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:24.661828  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:24.674068  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:24.759900  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:25.159730  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:25.160014  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:25.174822  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:25.259570  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:25.659440  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:25.660392  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:25.674818  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:25.759718  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:26.147606  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:26.159628  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:26.161042  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:26.174701  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:26.259645  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:26.661426  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:26.662087  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:26.674503  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:26.759217  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:27.159812  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:27.160262  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:27.174635  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:27.259405  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:27.659575  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:27.660727  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:27.674227  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:27.759021  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:28.147837  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:28.160082  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:28.160114  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:28.174316  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:28.259173  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:28.646812  761388 node_ready.go:49] node "addons-685250" has status "Ready":"True"
	I0919 18:40:28.646840  761388 node_ready.go:38] duration metric: took 43.002724586s for node "addons-685250" to be "Ready" ...
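
The 43-second wait closed above is minikube's node_ready.go polling the node until its kubelet posts a NodeReady=True condition; until that flips, the addon pods cannot run, which is why all four kapi.go selectors report Pending for the whole interval. A minimal client-go sketch of such a check, assuming a default kubeconfig; the helper name nodeReady and the client setup are illustrative, not minikube's own code:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node carries a NodeReady condition set to True.
func nodeReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
	node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Build a client from the default kubeconfig (illustrative setup only).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ready, err := nodeReady(context.Background(), client, "addons-685250")
	fmt.Println(ready, err)
}
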
	I0919 18:40:28.646862  761388 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 18:40:28.657370  761388 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xxkrh" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:28.665479  761388 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0919 18:40:28.665601  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:28.666301  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:28.673925  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:28.761809  761388 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0919 18:40:28.761844  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
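
The kapi.go:86 "Found N Pods" lines appear as soon as the scheduler places pods on the now-Ready node. In client-go terms that lookup is a single List call filtered by label selector; a sketch under the same assumed client setup as above (FindPods is a hypothetical name, not minikube's kapi.go):

// Package kapisketch is a hypothetical stand-in for minikube's kapi helpers.
package kapisketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// FindPods lists the pods matching a label selector in one namespace,
// mirroring the "Found N Pods for label selector ..." log line above.
func FindPods(ctx context.Context, c kubernetes.Interface, ns, selector string) ([]corev1.Pod, error) {
	list, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return nil, err
	}
	fmt.Printf("Found %d Pods for label selector %s\n", len(list.Items), selector)
	return list.Items, nil
}
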
	I0919 18:40:29.160890  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:29.161414  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:29.174200  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:29.262793  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:29.666949  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:29.668214  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:29.673941  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:29.760517  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:30.160901  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:30.165455  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:30.238277  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:30.261435  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:30.665010  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:30.665243  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:30.740441  761388 pod_ready.go:93] pod "coredns-7c65d6cfc9-xxkrh" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:30.740475  761388 pod_ready.go:82] duration metric: took 2.083070651s for pod "coredns-7c65d6cfc9-xxkrh" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.740502  761388 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-685250" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.740774  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:30.749009  761388 pod_ready.go:93] pod "etcd-addons-685250" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:30.749034  761388 pod_ready.go:82] duration metric: took 8.524276ms for pod "etcd-addons-685250" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.749051  761388 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-685250" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.755475  761388 pod_ready.go:93] pod "kube-apiserver-addons-685250" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:30.755499  761388 pod_ready.go:82] duration metric: took 6.439358ms for pod "kube-apiserver-addons-685250" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.755513  761388 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-685250" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.837071  761388 pod_ready.go:93] pod "kube-controller-manager-addons-685250" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:30.837158  761388 pod_ready.go:82] duration metric: took 81.634686ms for pod "kube-controller-manager-addons-685250" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.837180  761388 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tt5h8" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.842181  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:30.843110  761388 pod_ready.go:93] pod "kube-proxy-tt5h8" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:30.843130  761388 pod_ready.go:82] duration metric: took 5.940025ms for pod "kube-proxy-tt5h8" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.843141  761388 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-685250" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:31.064216  761388 pod_ready.go:93] pod "kube-scheduler-addons-685250" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:31.064250  761388 pod_ready.go:82] duration metric: took 221.10192ms for pod "kube-scheduler-addons-685250" in "kube-system" namespace to be "Ready" ...
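
Each pod_ready.go:93 success above reduces to a check of the pod's PodReady condition, which the kubelet sets once all of the pod's containers have passed their readiness probes. A minimal, hypothetical predicate in client-go terms (IsPodReady is an illustrative name, not minikube's code):

// Package readiness sketches the condition check behind minikube's pod_ready logs.
package readiness

import corev1 "k8s.io/api/core/v1"

// IsPodReady reports whether the pod's PodReady condition is True.
func IsPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

This is why metrics-server keeps logging "Ready":"False" below while its container comes up: the pod exists and may even be Running, but its readiness probe has not yet succeeded.
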
	I0919 18:40:31.064264  761388 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:31.160309  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:31.161868  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:31.175154  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:31.261445  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:31.661945  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:31.662739  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:31.674262  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:31.764171  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:32.160964  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:32.161120  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:32.175453  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:32.261255  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:32.660913  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:32.661774  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:32.675133  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:32.760592  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:33.070854  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:33.161051  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:33.161301  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:33.175286  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:33.260865  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:33.660702  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:33.661852  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:33.675273  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:33.760668  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:34.160546  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:34.161086  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:34.174285  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:34.260753  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:34.661118  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:34.661516  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:34.675418  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:34.760922  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:35.071857  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:35.160454  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:35.160768  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:35.175281  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:35.260345  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:35.660487  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:35.661415  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:35.674901  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:35.760686  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:36.160095  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:36.161029  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:36.174515  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:36.260186  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:36.660284  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:36.661541  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:36.674751  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:36.760998  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:37.160677  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:37.160812  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:37.174659  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:37.260012  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:37.569850  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:37.660726  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:37.661114  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:37.674871  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:37.762472  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:38.160011  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:38.161167  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:38.236912  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:38.261156  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:38.660760  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:38.661073  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:38.675428  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:38.760681  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:39.160674  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:39.161278  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:39.174402  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:39.259952  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:39.570471  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:39.660746  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:39.661314  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:39.675826  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:39.760609  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:40.160453  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:40.161002  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:40.175034  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:40.261000  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:40.660533  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:40.661321  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:40.674507  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:40.760519  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:41.160473  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:41.161342  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:41.174400  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:41.259949  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:41.570843  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:41.660891  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:41.661331  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:41.675442  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:41.761658  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:42.159681  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:42.161135  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:42.175056  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:42.260520  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:42.660591  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:42.660622  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:42.675267  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:42.761379  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:43.160638  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:43.161031  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:43.241441  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:43.261128  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:43.641195  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:43.660811  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:43.660936  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:43.674877  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:43.761319  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:44.160296  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:44.161343  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:44.174926  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:44.260471  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:44.660490  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:44.661342  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:44.674851  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:44.760497  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:45.160507  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:45.160595  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:45.174852  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:45.260568  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:45.660293  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:45.660999  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:45.674670  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:45.761087  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:46.070190  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:46.160550  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:46.160867  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:46.174270  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:46.260149  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:46.660826  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:46.661696  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:46.676864  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:46.760955  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:47.160938  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:47.161615  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:47.175003  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:47.260783  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:47.660110  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:47.663272  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:47.701700  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:47.760283  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:48.159939  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:48.160947  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:48.174393  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:48.261025  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:48.570860  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:48.660740  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:48.661222  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:48.674639  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:48.761763  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:49.160005  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:49.160755  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:49.175182  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:49.260174  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:49.661013  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:49.661304  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:49.675895  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:49.777512  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:50.160946  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:50.160950  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:50.174204  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:50.259800  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:50.660357  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:50.661468  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:50.674771  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:50.760091  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:51.069537  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:51.160657  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:51.161375  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:51.174522  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:51.260449  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:51.660943  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:51.661436  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:51.679949  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:51.760555  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:52.160884  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:52.161969  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:52.175511  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:52.260422  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:52.660009  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:52.661427  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:52.674747  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:52.760455  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:53.069882  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:53.160723  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:53.160847  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:53.175048  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:53.260265  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:53.660742  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:53.660975  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:53.675736  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:53.760427  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:54.160454  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:54.160554  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:54.175527  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:54.261623  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:54.661044  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:54.661280  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:54.674256  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:54.762345  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:55.161624  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:55.161856  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:55.177557  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:55.260964  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:55.571599  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:55.660145  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:55.661293  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:55.674636  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:55.760666  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:56.160746  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:56.161295  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:56.174304  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:56.259893  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:56.660305  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:56.661330  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:56.674639  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:56.759937  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:57.161201  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:57.161367  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:57.174319  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:57.259921  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:57.660452  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:57.661521  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:57.675492  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:57.760449  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:58.071078  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:58.166319  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:58.167684  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:58.174484  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:58.261744  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:58.739476  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:58.740647  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:58.741278  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:58.843925  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:59.250851  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:59.348633  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:59.349162  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:59.352318  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:59.660355  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:59.662169  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:59.737125  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:59.761343  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:00.071258  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:00.161047  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:00.161410  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:00.175212  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:00.261071  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:00.661009  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:00.662071  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:00.674963  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:00.761260  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:01.160995  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:01.161522  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:01.174377  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:01.261177  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:01.660419  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:01.661825  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:01.675387  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:01.760448  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:02.071634  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:02.160982  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:02.161497  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:02.175139  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:02.262015  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:02.660625  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:02.661137  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:02.676415  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:02.760266  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:03.160315  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:03.161430  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:03.174874  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:03.260917  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:03.660127  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:03.661283  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:03.760962  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:03.761328  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:04.160941  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:04.161529  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:04.175159  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:04.260532  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:04.570304  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:04.660567  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:04.661503  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:04.675149  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:04.761527  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:05.160742  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:05.161438  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:05.175035  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:05.260884  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:05.660133  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:05.661095  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:05.674647  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:05.760505  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:06.160998  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:06.161237  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:06.175185  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:06.261772  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:06.570424  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:06.660209  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:06.661433  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:06.675129  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:06.761340  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:07.160439  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:07.161643  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:07.175553  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:07.260491  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:07.661227  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:07.661700  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:07.674758  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:07.769893  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:08.160882  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:08.161229  761388 kapi.go:107] duration metric: took 1m18.003722545s to wait for kubernetes.io/minikube-addons=registry ...
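
The kapi.go:107 duration metric (1m18s for the registry selector) is the outer polling loop finally observing every matching pod in phase Running. A sketch of an equivalent loop using apimachinery's wait helpers; the 500ms interval mirrors the cadence visible in the timestamps above, while the 10-minute timeout is an assumption for illustration, not minikube's actual setting:

// Package kapisketch (continued): a hypothetical outer wait loop.
package kapisketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForSelector polls every 500ms until all pods matching the selector are
// Running, then reports the total elapsed time, like kapi.go:107 above.
func WaitForSelector(ctx context.Context, c kubernetes.Interface, ns, selector string) error {
	start := time.Now()
	err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 10*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient error or nothing scheduled yet: keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil // logged above as "current state: Pending"
				}
			}
			return true, nil
		})
	if err == nil {
		fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
	}
	return err
}
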
	I0919 18:41:08.174364  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:08.260993  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:08.570813  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:08.661066  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:08.675397  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:08.761869  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:09.163441  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:09.260343  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:09.261680  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:09.661162  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:09.738749  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:09.761895  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:10.161848  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:10.174642  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:10.261127  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:10.638793  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:10.660408  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:10.737983  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:10.761997  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:11.160636  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:11.238753  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:11.260239  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:11.661077  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:11.675809  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:11.760946  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:12.160226  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:12.174555  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:12.260120  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:12.660888  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:12.675281  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:12.759818  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:13.070755  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:13.159900  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:13.175280  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:13.260711  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:13.674228  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:13.675067  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:13.761264  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:14.160557  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:14.174803  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:14.260591  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:14.660641  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:14.675045  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:14.761376  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:15.070790  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:15.161017  761388 kapi.go:107] duration metric: took 1m25.005187502s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0919 18:41:15.174846  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:15.261085  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:15.675476  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:15.837474  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:16.268231  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:16.268764  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:16.676196  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:16.760827  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:17.176212  761388 kapi.go:107] duration metric: took 1m22.504803809s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0919 18:41:17.177857  761388 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-685250 cluster.
	I0919 18:41:17.179198  761388 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0919 18:41:17.180644  761388 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0919 18:41:17.262198  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:17.570361  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:17.760518  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:18.261747  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:18.761118  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:19.260370  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:19.570826  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:19.761115  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:20.260708  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:20.761013  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:21.260276  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:21.571353  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:21.760456  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:22.260815  761388 kapi.go:107] duration metric: took 1m31.004968765s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0919 18:41:22.262816  761388 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, ingress-dns, helm-tiller, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0919 18:41:22.264198  761388 addons.go:510] duration metric: took 1m38.250564753s for enable addons: enabled=[nvidia-device-plugin storage-provisioner ingress-dns helm-tiller cloud-spanner metrics-server yakd storage-provisioner-rancher inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0919 18:41:24.069345  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:26.070338  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:28.571150  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:31.069639  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:33.069801  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:35.069951  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:37.070152  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:39.570142  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:41.570373  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:44.069797  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:46.070575  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:46.570352  761388 pod_ready.go:93] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"True"
	I0919 18:41:46.570378  761388 pod_ready.go:82] duration metric: took 1m15.506104425s for pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:46.570389  761388 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-lnffq" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:46.574639  761388 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-lnffq" in "kube-system" namespace has status "Ready":"True"
	I0919 18:41:46.574659  761388 pod_ready.go:82] duration metric: took 4.26409ms for pod "nvidia-device-plugin-daemonset-lnffq" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:46.574677  761388 pod_ready.go:39] duration metric: took 1m17.927800889s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0919 18:41:46.574695  761388 api_server.go:52] waiting for apiserver process to appear ...
	I0919 18:41:46.574727  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 18:41:46.574775  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 18:41:46.610505  761388 cri.go:89] found id: "d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf"
	I0919 18:41:46.610525  761388 cri.go:89] found id: ""
	I0919 18:41:46.610532  761388 logs.go:276] 1 containers: [d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf]
	I0919 18:41:46.610585  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:46.614097  761388 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 18:41:46.614166  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 18:41:46.647964  761388 cri.go:89] found id: "daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf"
	I0919 18:41:46.647984  761388 cri.go:89] found id: ""
	I0919 18:41:46.647992  761388 logs.go:276] 1 containers: [daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf]
	I0919 18:41:46.648034  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:46.651737  761388 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 18:41:46.651827  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 18:41:46.685728  761388 cri.go:89] found id: "61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a"
	I0919 18:41:46.685751  761388 cri.go:89] found id: ""
	I0919 18:41:46.685761  761388 logs.go:276] 1 containers: [61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a]
	I0919 18:41:46.685842  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:46.689509  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 18:41:46.689602  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 18:41:46.723120  761388 cri.go:89] found id: "a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae"
	I0919 18:41:46.723148  761388 cri.go:89] found id: ""
	I0919 18:41:46.723159  761388 logs.go:276] 1 containers: [a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae]
	I0919 18:41:46.723206  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:46.726505  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 18:41:46.726561  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 18:41:46.764041  761388 cri.go:89] found id: "1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d"
	I0919 18:41:46.764067  761388 cri.go:89] found id: ""
	I0919 18:41:46.764076  761388 logs.go:276] 1 containers: [1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d]
	I0919 18:41:46.764139  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:46.767386  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 18:41:46.767456  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 18:41:46.801334  761388 cri.go:89] found id: "4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148"
	I0919 18:41:46.801362  761388 cri.go:89] found id: ""
	I0919 18:41:46.801373  761388 logs.go:276] 1 containers: [4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148]
	I0919 18:41:46.801437  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:46.804747  761388 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 18:41:46.804810  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 18:41:46.838269  761388 cri.go:89] found id: "28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea"
	I0919 18:41:46.838289  761388 cri.go:89] found id: ""
	I0919 18:41:46.838297  761388 logs.go:276] 1 containers: [28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea]
	I0919 18:41:46.838353  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:46.841583  761388 logs.go:123] Gathering logs for describe nodes ...
	I0919 18:41:46.841608  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 18:41:46.939796  761388 logs.go:123] Gathering logs for kube-proxy [1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d] ...
	I0919 18:41:46.939825  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d"
	I0919 18:41:46.973962  761388 logs.go:123] Gathering logs for kube-controller-manager [4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148] ...
	I0919 18:41:46.973996  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148"
	I0919 18:41:47.040527  761388 logs.go:123] Gathering logs for kindnet [28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea] ...
	I0919 18:41:47.040563  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea"
	I0919 18:41:47.079512  761388 logs.go:123] Gathering logs for CRI-O ...
	I0919 18:41:47.079548  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 18:41:47.156835  761388 logs.go:123] Gathering logs for kubelet ...
	I0919 18:41:47.156873  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 18:41:47.244389  761388 logs.go:123] Gathering logs for kube-apiserver [d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf] ...
	I0919 18:41:47.244425  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf"
	I0919 18:41:47.291698  761388 logs.go:123] Gathering logs for etcd [daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf] ...
	I0919 18:41:47.291734  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf"
	I0919 18:41:47.339857  761388 logs.go:123] Gathering logs for coredns [61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a] ...
	I0919 18:41:47.339892  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a"
	I0919 18:41:47.378377  761388 logs.go:123] Gathering logs for kube-scheduler [a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae] ...
	I0919 18:41:47.378414  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae"
	I0919 18:41:47.419595  761388 logs.go:123] Gathering logs for container status ...
	I0919 18:41:47.419631  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 18:41:47.461066  761388 logs.go:123] Gathering logs for dmesg ...
	I0919 18:41:47.461101  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 18:41:49.991902  761388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 18:41:50.006246  761388 api_server.go:72] duration metric: took 2m5.992641544s to wait for apiserver process to appear ...
	I0919 18:41:50.006277  761388 api_server.go:88] waiting for apiserver healthz status ...
	I0919 18:41:50.006316  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 18:41:50.006369  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 18:41:50.040275  761388 cri.go:89] found id: "d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf"
	I0919 18:41:50.040319  761388 cri.go:89] found id: ""
	I0919 18:41:50.040329  761388 logs.go:276] 1 containers: [d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf]
	I0919 18:41:50.040373  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:50.043705  761388 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 18:41:50.043766  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 18:41:50.078798  761388 cri.go:89] found id: "daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf"
	I0919 18:41:50.078819  761388 cri.go:89] found id: ""
	I0919 18:41:50.078826  761388 logs.go:276] 1 containers: [daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf]
	I0919 18:41:50.078884  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:50.082274  761388 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 18:41:50.082341  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 18:41:50.116003  761388 cri.go:89] found id: "61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a"
	I0919 18:41:50.116024  761388 cri.go:89] found id: ""
	I0919 18:41:50.116032  761388 logs.go:276] 1 containers: [61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a]
	I0919 18:41:50.116082  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:50.119438  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 18:41:50.119496  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 18:41:50.153370  761388 cri.go:89] found id: "a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae"
	I0919 18:41:50.153390  761388 cri.go:89] found id: ""
	I0919 18:41:50.153398  761388 logs.go:276] 1 containers: [a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae]
	I0919 18:41:50.153451  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:50.156934  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 18:41:50.156999  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 18:41:50.191346  761388 cri.go:89] found id: "1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d"
	I0919 18:41:50.191372  761388 cri.go:89] found id: ""
	I0919 18:41:50.191381  761388 logs.go:276] 1 containers: [1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d]
	I0919 18:41:50.191442  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:50.195442  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 18:41:50.195523  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 18:41:50.230094  761388 cri.go:89] found id: "4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148"
	I0919 18:41:50.230116  761388 cri.go:89] found id: ""
	I0919 18:41:50.230126  761388 logs.go:276] 1 containers: [4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148]
	I0919 18:41:50.230173  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:50.233591  761388 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 18:41:50.233648  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 18:41:50.267946  761388 cri.go:89] found id: "28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea"
	I0919 18:41:50.267968  761388 cri.go:89] found id: ""
	I0919 18:41:50.267976  761388 logs.go:276] 1 containers: [28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea]
	I0919 18:41:50.268020  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:50.271492  761388 logs.go:123] Gathering logs for etcd [daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf] ...
	I0919 18:41:50.271521  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf"
	I0919 18:41:50.315171  761388 logs.go:123] Gathering logs for coredns [61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a] ...
	I0919 18:41:50.315204  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a"
	I0919 18:41:50.350242  761388 logs.go:123] Gathering logs for kube-controller-manager [4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148] ...
	I0919 18:41:50.350276  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148"
	I0919 18:41:50.406986  761388 logs.go:123] Gathering logs for kindnet [28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea] ...
	I0919 18:41:50.407024  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea"
	I0919 18:41:50.443914  761388 logs.go:123] Gathering logs for CRI-O ...
	I0919 18:41:50.443950  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 18:41:50.522117  761388 logs.go:123] Gathering logs for kubelet ...
	I0919 18:41:50.522161  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 18:41:50.603999  761388 logs.go:123] Gathering logs for dmesg ...
	I0919 18:41:50.604036  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 18:41:50.633867  761388 logs.go:123] Gathering logs for describe nodes ...
	I0919 18:41:50.633909  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 18:41:50.735662  761388 logs.go:123] Gathering logs for kube-apiserver [d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf] ...
	I0919 18:41:50.735694  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf"
	I0919 18:41:50.778766  761388 logs.go:123] Gathering logs for kube-scheduler [a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae] ...
	I0919 18:41:50.778800  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae"
	I0919 18:41:50.822323  761388 logs.go:123] Gathering logs for kube-proxy [1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d] ...
	I0919 18:41:50.822362  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d"
	I0919 18:41:50.858212  761388 logs.go:123] Gathering logs for container status ...
	I0919 18:41:50.858244  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 18:41:53.402426  761388 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 18:41:53.406334  761388 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 18:41:53.407293  761388 api_server.go:141] control plane version: v1.31.1
	I0919 18:41:53.407337  761388 api_server.go:131] duration metric: took 3.401052443s to wait for apiserver health ...
	I0919 18:41:53.407348  761388 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 18:41:53.407372  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 18:41:53.407424  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 18:41:53.442342  761388 cri.go:89] found id: "d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf"
	I0919 18:41:53.442368  761388 cri.go:89] found id: ""
	I0919 18:41:53.442378  761388 logs.go:276] 1 containers: [d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf]
	I0919 18:41:53.442443  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:53.445843  761388 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 18:41:53.445911  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 18:41:53.479392  761388 cri.go:89] found id: "daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf"
	I0919 18:41:53.479417  761388 cri.go:89] found id: ""
	I0919 18:41:53.479427  761388 logs.go:276] 1 containers: [daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf]
	I0919 18:41:53.479483  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:53.482761  761388 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 18:41:53.482821  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 18:41:53.517132  761388 cri.go:89] found id: "61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a"
	I0919 18:41:53.517157  761388 cri.go:89] found id: ""
	I0919 18:41:53.517169  761388 logs.go:276] 1 containers: [61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a]
	I0919 18:41:53.517224  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:53.520542  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 18:41:53.520602  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 18:41:53.554085  761388 cri.go:89] found id: "a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae"
	I0919 18:41:53.554107  761388 cri.go:89] found id: ""
	I0919 18:41:53.554116  761388 logs.go:276] 1 containers: [a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae]
	I0919 18:41:53.554174  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:53.557699  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 18:41:53.557779  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 18:41:53.591682  761388 cri.go:89] found id: "1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d"
	I0919 18:41:53.591703  761388 cri.go:89] found id: ""
	I0919 18:41:53.591711  761388 logs.go:276] 1 containers: [1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d]
	I0919 18:41:53.591755  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:53.595094  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 18:41:53.595172  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 18:41:53.630170  761388 cri.go:89] found id: "4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148"
	I0919 18:41:53.630192  761388 cri.go:89] found id: ""
	I0919 18:41:53.630199  761388 logs.go:276] 1 containers: [4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148]
	I0919 18:41:53.630257  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:53.633583  761388 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 18:41:53.633636  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 18:41:53.667431  761388 cri.go:89] found id: "28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea"
	I0919 18:41:53.667451  761388 cri.go:89] found id: ""
	I0919 18:41:53.667459  761388 logs.go:276] 1 containers: [28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea]
	I0919 18:41:53.667505  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:53.670883  761388 logs.go:123] Gathering logs for coredns [61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a] ...
	I0919 18:41:53.670906  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a"
	I0919 18:41:53.707961  761388 logs.go:123] Gathering logs for kube-scheduler [a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae] ...
	I0919 18:41:53.707993  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae"
	I0919 18:41:53.749962  761388 logs.go:123] Gathering logs for kube-controller-manager [4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148] ...
	I0919 18:41:53.749997  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148"
	I0919 18:41:53.808507  761388 logs.go:123] Gathering logs for kindnet [28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea] ...
	I0919 18:41:53.808548  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea"
	I0919 18:41:53.843831  761388 logs.go:123] Gathering logs for container status ...
	I0919 18:41:53.843860  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 18:41:53.886934  761388 logs.go:123] Gathering logs for kubelet ...
	I0919 18:41:53.886962  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 18:41:53.965269  761388 logs.go:123] Gathering logs for dmesg ...
	I0919 18:41:53.965305  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 18:41:54.000130  761388 logs.go:123] Gathering logs for describe nodes ...
	I0919 18:41:54.000165  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 18:41:54.102256  761388 logs.go:123] Gathering logs for CRI-O ...
	I0919 18:41:54.102283  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 18:41:54.180041  761388 logs.go:123] Gathering logs for kube-apiserver [d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf] ...
	I0919 18:41:54.180082  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf"
	I0919 18:41:54.225323  761388 logs.go:123] Gathering logs for etcd [daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf] ...
	I0919 18:41:54.225355  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf"
	I0919 18:41:54.270873  761388 logs.go:123] Gathering logs for kube-proxy [1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d] ...
	I0919 18:41:54.270914  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d"
	I0919 18:41:56.816722  761388 system_pods.go:59] 19 kube-system pods found
	I0919 18:41:56.816754  761388 system_pods.go:61] "coredns-7c65d6cfc9-xxkrh" [a7aaff41-f43e-4f04-b483-640f84c09e46] Running
	I0919 18:41:56.816759  761388 system_pods.go:61] "csi-hostpath-attacher-0" [baa243bf-40a7-484e-8c01-0899f41d8354] Running
	I0919 18:41:56.816763  761388 system_pods.go:61] "csi-hostpath-resizer-0" [3c4594f5-9d7b-4793-a0c8-7c6105b7d474] Running
	I0919 18:41:56.816767  761388 system_pods.go:61] "csi-hostpathplugin-wvvls" [354c11da-ee7f-4cda-9e0d-9814a4c5ece1] Running
	I0919 18:41:56.816770  761388 system_pods.go:61] "etcd-addons-685250" [cdb92c06-962c-4149-b7f6-bb5fe8331afd] Running
	I0919 18:41:56.816773  761388 system_pods.go:61] "kindnet-nr24c" [8747e20c-57fd-4ffe-9f87-ddda89de3e7b] Running
	I0919 18:41:56.816777  761388 system_pods.go:61] "kube-apiserver-addons-685250" [593c1822-def4-4967-babb-da46832c2f3b] Running
	I0919 18:41:56.816780  761388 system_pods.go:61] "kube-controller-manager-addons-685250" [241a64c3-08de-424a-8a6f-aaad07ae351f] Running
	I0919 18:41:56.816783  761388 system_pods.go:61] "kube-ingress-dns-minikube" [4d2c1d92-69aa-4dcd-be37-639b9fd4ab3d] Running
	I0919 18:41:56.816787  761388 system_pods.go:61] "kube-proxy-tt5h8" [693e7420-8268-43db-82ab-191606a57636] Running
	I0919 18:41:56.816791  761388 system_pods.go:61] "kube-scheduler-addons-685250" [57e53de0-08d3-4b04-822c-361178eb9bdf] Running
	I0919 18:41:56.816796  761388 system_pods.go:61] "metrics-server-84c5f94fbc-gpv2k" [0041dcd9-b46b-406b-a78c-728fda2b92cc] Running
	I0919 18:41:56.816800  761388 system_pods.go:61] "nvidia-device-plugin-daemonset-lnffq" [b2573f29-e8a6-4fc7-9a19-a01fb32e67f2] Running
	I0919 18:41:56.816805  761388 system_pods.go:61] "registry-66c9cd494c-tsz4w" [bdd1e643-0c83-4fed-a147-0dd79f789e29] Running
	I0919 18:41:56.816814  761388 system_pods.go:61] "registry-proxy-rgdgh" [fc0b3544-d729-4e33-a260-ef1ab277d08f] Running
	I0919 18:41:56.816821  761388 system_pods.go:61] "snapshot-controller-56fcc65765-hpwtx" [119e2c3a-894e-4b8d-b275-06125bb32c87] Running
	I0919 18:41:56.816825  761388 system_pods.go:61] "snapshot-controller-56fcc65765-qsngh" [8eba870c-9765-4259-b19c-945987c52d6e] Running
	I0919 18:41:56.816831  761388 system_pods.go:61] "storage-provisioner" [ddbf1396-7100-4a51-a1b7-b6896cabc0f4] Running
	I0919 18:41:56.816836  761388 system_pods.go:61] "tiller-deploy-b48cc5f79-64k5s" [bedc3304-f3bb-4c40-bb2c-bec621a3645c] Running
	I0919 18:41:56.816844  761388 system_pods.go:74] duration metric: took 3.409487976s to wait for pod list to return data ...
	I0919 18:41:56.816856  761388 default_sa.go:34] waiting for default service account to be created ...
	I0919 18:41:56.819044  761388 default_sa.go:45] found service account: "default"
	I0919 18:41:56.819064  761388 default_sa.go:55] duration metric: took 2.201823ms for default service account to be created ...
	I0919 18:41:56.819072  761388 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 18:41:56.827195  761388 system_pods.go:86] 19 kube-system pods found
	I0919 18:41:56.827219  761388 system_pods.go:89] "coredns-7c65d6cfc9-xxkrh" [a7aaff41-f43e-4f04-b483-640f84c09e46] Running
	I0919 18:41:56.827224  761388 system_pods.go:89] "csi-hostpath-attacher-0" [baa243bf-40a7-484e-8c01-0899f41d8354] Running
	I0919 18:41:56.827229  761388 system_pods.go:89] "csi-hostpath-resizer-0" [3c4594f5-9d7b-4793-a0c8-7c6105b7d474] Running
	I0919 18:41:56.827232  761388 system_pods.go:89] "csi-hostpathplugin-wvvls" [354c11da-ee7f-4cda-9e0d-9814a4c5ece1] Running
	I0919 18:41:56.827236  761388 system_pods.go:89] "etcd-addons-685250" [cdb92c06-962c-4149-b7f6-bb5fe8331afd] Running
	I0919 18:41:56.827239  761388 system_pods.go:89] "kindnet-nr24c" [8747e20c-57fd-4ffe-9f87-ddda89de3e7b] Running
	I0919 18:41:56.827243  761388 system_pods.go:89] "kube-apiserver-addons-685250" [593c1822-def4-4967-babb-da46832c2f3b] Running
	I0919 18:41:56.827246  761388 system_pods.go:89] "kube-controller-manager-addons-685250" [241a64c3-08de-424a-8a6f-aaad07ae351f] Running
	I0919 18:41:56.827250  761388 system_pods.go:89] "kube-ingress-dns-minikube" [4d2c1d92-69aa-4dcd-be37-639b9fd4ab3d] Running
	I0919 18:41:56.827254  761388 system_pods.go:89] "kube-proxy-tt5h8" [693e7420-8268-43db-82ab-191606a57636] Running
	I0919 18:41:56.827258  761388 system_pods.go:89] "kube-scheduler-addons-685250" [57e53de0-08d3-4b04-822c-361178eb9bdf] Running
	I0919 18:41:56.827261  761388 system_pods.go:89] "metrics-server-84c5f94fbc-gpv2k" [0041dcd9-b46b-406b-a78c-728fda2b92cc] Running
	I0919 18:41:56.827264  761388 system_pods.go:89] "nvidia-device-plugin-daemonset-lnffq" [b2573f29-e8a6-4fc7-9a19-a01fb32e67f2] Running
	I0919 18:41:56.827267  761388 system_pods.go:89] "registry-66c9cd494c-tsz4w" [bdd1e643-0c83-4fed-a147-0dd79f789e29] Running
	I0919 18:41:56.827270  761388 system_pods.go:89] "registry-proxy-rgdgh" [fc0b3544-d729-4e33-a260-ef1ab277d08f] Running
	I0919 18:41:56.827273  761388 system_pods.go:89] "snapshot-controller-56fcc65765-hpwtx" [119e2c3a-894e-4b8d-b275-06125bb32c87] Running
	I0919 18:41:56.827276  761388 system_pods.go:89] "snapshot-controller-56fcc65765-qsngh" [8eba870c-9765-4259-b19c-945987c52d6e] Running
	I0919 18:41:56.827279  761388 system_pods.go:89] "storage-provisioner" [ddbf1396-7100-4a51-a1b7-b6896cabc0f4] Running
	I0919 18:41:56.827282  761388 system_pods.go:89] "tiller-deploy-b48cc5f79-64k5s" [bedc3304-f3bb-4c40-bb2c-bec621a3645c] Running
	I0919 18:41:56.827287  761388 system_pods.go:126] duration metric: took 8.210478ms to wait for k8s-apps to be running ...
	I0919 18:41:56.827294  761388 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 18:41:56.827364  761388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 18:41:56.838722  761388 system_svc.go:56] duration metric: took 11.419899ms WaitForService to wait for kubelet
	I0919 18:41:56.838749  761388 kubeadm.go:582] duration metric: took 2m12.825152378s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 18:41:56.838775  761388 node_conditions.go:102] verifying NodePressure condition ...
	I0919 18:41:56.841799  761388 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 18:41:56.841823  761388 node_conditions.go:123] node cpu capacity is 8
	I0919 18:41:56.841837  761388 node_conditions.go:105] duration metric: took 3.056374ms to run NodePressure ...
	I0919 18:41:56.841850  761388 start.go:241] waiting for startup goroutines ...
	I0919 18:41:56.841857  761388 start.go:246] waiting for cluster config update ...
	I0919 18:41:56.841872  761388 start.go:255] writing updated cluster config ...
	I0919 18:41:56.842127  761388 ssh_runner.go:195] Run: rm -f paused
	I0919 18:41:56.891468  761388 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0919 18:41:56.894630  761388 out.go:177] * Done! kubectl is now configured to use "addons-685250" cluster and "default" namespace by default
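	
	The post-mortem dumps that follow were collected on the node itself. A minimal sketch of reproducing the same collection by hand, assuming the addons-685250 profile is still running and reachable with `minikube ssh` (these are the same commands the log gatherer invokes via ssh_runner above):
	
	# manual re-run of the gatherer's node-side commands
	minikube -p addons-685250 ssh -- sudo crictl ps -a
	minikube -p addons-685250 ssh -- sudo journalctl -u crio -n 400
	minikube -p addons-685250 ssh -- sudo journalctl -u kubelet -n 400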
	
	
	==> CRI-O <==
	Sep 19 18:54:38 addons-685250 crio[1028]: time="2024-09-19 18:54:38.353714883Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d0ffc8cb-396a-474b-95e8-0ae3d701ec83 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:54:38 addons-685250 crio[1028]: time="2024-09-19 18:54:38.353976411Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d0ffc8cb-396a-474b-95e8-0ae3d701ec83 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:54:43 addons-685250 crio[1028]: time="2024-09-19 18:54:43.354195861Z" level=info msg="Checking image status: docker.io/nginx:latest" id=9ce0102f-33af-43ab-b3a9-9cc563d6eb33 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:54:43 addons-685250 crio[1028]: time="2024-09-19 18:54:43.354462712Z" level=info msg="Image docker.io/nginx:latest not found" id=9ce0102f-33af-43ab-b3a9-9cc563d6eb33 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:54:49 addons-685250 crio[1028]: time="2024-09-19 18:54:49.354768267Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e80c0719-8d6b-4733-baae-9d1634175472 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:54:49 addons-685250 crio[1028]: time="2024-09-19 18:54:49.354992357Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e80c0719-8d6b-4733-baae-9d1634175472 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:54:54 addons-685250 crio[1028]: time="2024-09-19 18:54:54.354474527Z" level=info msg="Checking image status: docker.io/nginx:latest" id=63fbea10-e6b7-45e2-8dac-e804823d915f name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:54:54 addons-685250 crio[1028]: time="2024-09-19 18:54:54.354785101Z" level=info msg="Image docker.io/nginx:latest not found" id=63fbea10-e6b7-45e2-8dac-e804823d915f name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:55:00 addons-685250 crio[1028]: time="2024-09-19 18:55:00.354133881Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d7ae5ef6-b69b-49b5-9f9f-4e094d945fce name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:55:00 addons-685250 crio[1028]: time="2024-09-19 18:55:00.354340734Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d7ae5ef6-b69b-49b5-9f9f-4e094d945fce name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:55:09 addons-685250 crio[1028]: time="2024-09-19 18:55:09.354723903Z" level=info msg="Checking image status: docker.io/nginx:latest" id=84cd2ee1-fc32-44a6-843a-d91f93d0a88e name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:55:09 addons-685250 crio[1028]: time="2024-09-19 18:55:09.354968263Z" level=info msg="Image docker.io/nginx:latest not found" id=84cd2ee1-fc32-44a6-843a-d91f93d0a88e name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:55:15 addons-685250 crio[1028]: time="2024-09-19 18:55:15.354041702Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=72410eee-76e9-4893-aa77-818ccf927909 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:55:15 addons-685250 crio[1028]: time="2024-09-19 18:55:15.354352870Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=72410eee-76e9-4893-aa77-818ccf927909 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:55:21 addons-685250 crio[1028]: time="2024-09-19 18:55:21.354336894Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=79cd5378-16c5-4c2e-a1f7-47e3d6c92751 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:55:21 addons-685250 crio[1028]: time="2024-09-19 18:55:21.354332997Z" level=info msg="Checking image status: docker.io/nginx:latest" id=79a07341-8c6f-40e8-8d50-8a30f2e20937 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:55:21 addons-685250 crio[1028]: time="2024-09-19 18:55:21.354620408Z" level=info msg="Image docker.io/nginx:alpine not found" id=79cd5378-16c5-4c2e-a1f7-47e3d6c92751 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:55:21 addons-685250 crio[1028]: time="2024-09-19 18:55:21.354633134Z" level=info msg="Image docker.io/nginx:latest not found" id=79a07341-8c6f-40e8-8d50-8a30f2e20937 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:55:27 addons-685250 crio[1028]: time="2024-09-19 18:55:27.354501233Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4a9ee235-f61e-4878-8737-0214c4d4ba9d name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:55:27 addons-685250 crio[1028]: time="2024-09-19 18:55:27.354860914Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=4a9ee235-f61e-4878-8737-0214c4d4ba9d name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:55:32 addons-685250 crio[1028]: time="2024-09-19 18:55:32.354537360Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=20137872-77a6-4e5c-bc02-850e14458e53 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:55:32 addons-685250 crio[1028]: time="2024-09-19 18:55:32.354820220Z" level=info msg="Image docker.io/nginx:alpine not found" id=20137872-77a6-4e5c-bc02-850e14458e53 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:55:35 addons-685250 crio[1028]: time="2024-09-19 18:55:35.354200822Z" level=info msg="Checking image status: docker.io/nginx:latest" id=d4c361dc-ad39-4df3-9b36-82adc774d71d name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:55:35 addons-685250 crio[1028]: time="2024-09-19 18:55:35.354463272Z" level=info msg="Image docker.io/nginx:latest not found" id=d4c361dc-ad39-4df3-9b36-82adc774d71d name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:55:36 addons-685250 crio[1028]: time="2024-09-19 18:55:36.843874026Z" level=info msg="Stopping container: 3def0c19497bb9d4281de3fde17e1803880d219071a41edf14d086fcb4db5a47 (timeout: 30s)" id=b5b898f6-90b6-4782-b264-e3c9c1cc076f name=/runtime.v1.RuntimeService/StopContainer
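	
	The CRI-O log above is dominated by ImageStatus probes for gcr.io/k8s-minikube/busybox:1.28.4-glibc and docker.io/nginx, each answered with "not found", i.e. those images were never successfully pulled on this node. A quick way to confirm from the host, assuming the node is still up (standard crictl commands, not taken from this run):
	
	# list the images actually present on the node
	minikube -p addons-685250 ssh -- sudo crictl images
	# attempt the pull by hand to surface registry/network errors directly
	minikube -p addons-685250 ssh -- sudo crictl pull docker.io/nginx:alpine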
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	9631f3dbcf504       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          14 minutes ago      Running             csi-snapshotter                          0                   9d633bbb3f6dc       csi-hostpathplugin-wvvls
	96030830b51d1       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          14 minutes ago      Running             csi-provisioner                          0                   9d633bbb3f6dc       csi-hostpathplugin-wvvls
	32bc4d23668fc       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            14 minutes ago      Running             liveness-probe                           0                   9d633bbb3f6dc       csi-hostpathplugin-wvvls
	0cc2312cf82a4       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           14 minutes ago      Running             hostpath                                 0                   9d633bbb3f6dc       csi-hostpathplugin-wvvls
	8763c1c636d0e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                                 14 minutes ago      Running             gcp-auth                                 0                   c4905e6f06668       gcp-auth-89d5ffd79-5xmj7
	6ec44220259bc       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6                             14 minutes ago      Running             controller                               0                   7eeed172b87cd       ingress-nginx-controller-bc57996ff-jwqfz
	533fe244bc19f       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                14 minutes ago      Running             node-driver-registrar                    0                   9d633bbb3f6dc       csi-hostpathplugin-wvvls
	781e8a586344e       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              14 minutes ago      Running             csi-resizer                              0                   79d20db0c7bd8       csi-hostpath-resizer-0
	135118d48b8e5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   14 minutes ago      Exited              patch                                    0                   b5047ec8d653b       ingress-nginx-admission-patch-zkk9z
	6148ff93b7e21       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      14 minutes ago      Running             volume-snapshot-controller               0                   2c111431a9537       snapshot-controller-56fcc65765-hpwtx
	776cccb0a5bb1       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   14 minutes ago      Running             csi-external-health-monitor-controller   0                   9d633bbb3f6dc       csi-hostpathplugin-wvvls
	ae42c7830ff31       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      14 minutes ago      Running             volume-snapshot-controller               0                   a67d1128cd369       snapshot-controller-56fcc65765-qsngh
	3bae675b3b545       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   14 minutes ago      Exited              create                                   0                   00fa51ee04653       ingress-nginx-admission-create-rqqsb
	3def0c19497bb       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a                        14 minutes ago      Running             metrics-server                           0                   4dc38a01fe945       metrics-server-84c5f94fbc-gpv2k
	cd361280e82f5       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             15 minutes ago      Running             csi-attacher                             0                   995144454e795       csi-hostpath-attacher-0
	71455e9d9d7f9       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             15 minutes ago      Running             minikube-ingress-dns                     0                   1b3ebc5c0bddd       kube-ingress-dns-minikube
	c265d33c64155       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             15 minutes ago      Running             storage-provisioner                      0                   f0b8765d93237       storage-provisioner
	61dc325585534       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             15 minutes ago      Running             coredns                                  0                   70191f5a80edd       coredns-7c65d6cfc9-xxkrh
	28c707c30998a       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                                             15 minutes ago      Running             kindnet-cni                              0                   d0d4a24bd5f33       kindnet-nr24c
	1577029617c13       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                                             15 minutes ago      Running             kube-proxy                               0                   006fe668e3bca       kube-proxy-tt5h8
	a9c5d6500618f       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                                             16 minutes ago      Running             kube-scheduler                           0                   6a497d68d67db       kube-scheduler-addons-685250
	4b38bddc95b37       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                                             16 minutes ago      Running             kube-controller-manager                  0                   8dc935b2a1118       kube-controller-manager-addons-685250
	daa04e6dadb8c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                                             16 minutes ago      Running             etcd                                     0                   49d2cd4b861cb       etcd-addons-685250
	d48e736f52b35       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                                             16 minutes ago      Running             kube-apiserver                           0                   ee84a44e45fe4       kube-apiserver-addons-685250
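	
	The truncated IDs in the CONTAINER column can be fed back to crictl, which generally accepts a unique ID prefix, to pull logs for a single container — for example the metrics-server container (3def0c19497bb) that was being stopped at the end of the CRI-O log above. A sketch, assuming the container still exists on the node:
	
	# fetch recent logs for one container by ID prefix
	minikube -p addons-685250 ssh -- sudo crictl logs --tail 400 3def0c19497bb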
	
	
	==> coredns [61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a] <==
	[INFO] 10.244.0.18:34436 - 35698 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000108309s
	[INFO] 10.244.0.18:53834 - 64751 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000039533s
	[INFO] 10.244.0.18:53834 - 26861 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000063287s
	[INFO] 10.244.0.18:40724 - 19030 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005948549s
	[INFO] 10.244.0.18:40724 - 2384 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.00624164s
	[INFO] 10.244.0.18:55178 - 49717 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004779846s
	[INFO] 10.244.0.18:55178 - 43576 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.008989283s
	[INFO] 10.244.0.18:35236 - 29185 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005503532s
	[INFO] 10.244.0.18:35236 - 29053 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006569969s
	[INFO] 10.244.0.18:58901 - 23064 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00007067s
	[INFO] 10.244.0.18:58901 - 45339 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000090322s
	[INFO] 10.244.0.21:52948 - 4177 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000227224s
	[INFO] 10.244.0.21:45787 - 22571 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000317788s
	[INFO] 10.244.0.21:59704 - 52899 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000152904s
	[INFO] 10.244.0.21:50018 - 4022 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000239218s
	[INFO] 10.244.0.21:53553 - 39101 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000141888s
	[INFO] 10.244.0.21:37741 - 20732 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000217668s
	[INFO] 10.244.0.21:55394 - 50618 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.005906983s
	[INFO] 10.244.0.21:37603 - 64460 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.00595091s
	[INFO] 10.244.0.21:43538 - 27403 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006051611s
	[INFO] 10.244.0.21:54216 - 9854 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.00637344s
	[INFO] 10.244.0.21:36139 - 65099 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007481578s
	[INFO] 10.244.0.21:49105 - 14009 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.010893085s
	[INFO] 10.244.0.21:52556 - 17077 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000849386s
	[INFO] 10.244.0.21:56780 - 3812 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000933647s
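	
	The run of NXDOMAIN answers above is the pod resolver walking its search path (with ndots:5, the bare service name is expanded through svc.cluster.local, cluster.local, and the GCE-supplied suffixes) before the final NOERROR hits on registry.kube-system.svc.cluster.local. One way to confirm the short-circuit, assuming the busybox image is pullable (it was not during this run), is to query the fully qualified name with a trailing dot so the search list is skipped:
	
	  kubectl --context addons-685250 run dns-probe --rm -it --restart=Never \
	    --image=gcr.io/k8s-minikube/busybox -- \
	    nslookup registry.kube-system.svc.cluster.local.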
	
	
	==> describe nodes <==
	Name:               addons-685250
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-685250
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=addons-685250
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_19T18_39_40_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-685250
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-685250"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 18:39:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-685250
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 18:55:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 18:51:43 +0000   Thu, 19 Sep 2024 18:39:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 18:51:43 +0000   Thu, 19 Sep 2024 18:39:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 18:51:43 +0000   Thu, 19 Sep 2024 18:39:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 18:51:43 +0000   Thu, 19 Sep 2024 18:40:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-685250
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 59964951ae744ca891a1d33d48395cb6
	  System UUID:                ca4c5e3c-dd72-4ffd-b420-cdf7d87c497b
	  Boot ID:                    e13586fb-8251-4108-a9ef-ca5be7772d16
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	  gcp-auth                    gcp-auth-89d5ffd79-5xmj7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-jwqfz    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         15m
	  kube-system                 coredns-7c65d6cfc9-xxkrh                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     15m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 csi-hostpathplugin-wvvls                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 etcd-addons-685250                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         16m
	  kube-system                 kindnet-nr24c                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-addons-685250                250m (3%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-685250       200m (2%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-tt5h8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-685250                100m (1%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 snapshot-controller-56fcc65765-hpwtx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 snapshot-controller-56fcc65765-qsngh        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 15m                kube-proxy       
	  Normal   Starting                 16m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 16m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node addons-685250 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node addons-685250 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node addons-685250 status is now: NodeHasSufficientPID
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 15m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  15m                kubelet          Node addons-685250 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m                kubelet          Node addons-685250 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m                kubelet          Node addons-685250 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m                node-controller  Node addons-685250 event: Registered Node addons-685250 in Controller
	  Normal   NodeReady                15m                kubelet          Node addons-685250 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000002] ll header: 00000000: 02 42 9c 9b da 37 02 42 c0 a8 55 02 08 00
	[ +49.810034] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000005] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.000002] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000002] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +1.030260] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000006] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.003972] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000005] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +2.011865] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000005] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.003970] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000004] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +4.219718] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000009] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000005] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[Sep19 18:17] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000009] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.000035] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000006] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
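	
	The repeating "martian source 10.96.0.1" entries show the kubernetes service ClusterIP appearing as a source address on the docker bridge, a common artifact of minikube's bridged networking rather than a test failure. If the noise is unwanted on the CI host, martian logging can be silenced with a host-level sysctl (a sketch of a host tweak, not something the test itself does):
	
	  sudo sysctl -w net.ipv4.conf.all.log_martians=0
	  sudo sysctl -w net.ipv4.conf.default.log_martians=0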
	
	
	==> etcd [daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf] <==
	{"level":"info","ts":"2024-09-19T18:39:45.855653Z","caller":"traceutil/trace.go:171","msg":"trace[11607049] linearizableReadLoop","detail":"{readStateIndex:406; appliedIndex:404; }","duration":"105.61545ms","start":"2024-09-19T18:39:45.750016Z","end":"2024-09-19T18:39:45.855632Z","steps":["trace[11607049] 'read index received'  (duration: 86.226896ms)","trace[11607049] 'applied index is now lower than readState.Index'  (duration: 19.387979ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-19T18:39:45.855963Z","caller":"traceutil/trace.go:171","msg":"trace[722294032] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"106.750007ms","start":"2024-09-19T18:39:45.749192Z","end":"2024-09-19T18:39:45.855942Z","steps":["trace[722294032] 'process raft request'  (duration: 100.852428ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:39:45.856192Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.988653ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4034"}
	{"level":"info","ts":"2024-09-19T18:39:45.856224Z","caller":"traceutil/trace.go:171","msg":"trace[83912261] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:395; }","duration":"202.035355ms","start":"2024-09-19T18:39:45.654180Z","end":"2024-09-19T18:39:45.856215Z","steps":["trace[83912261] 'agreement among raft nodes before linearized reading'  (duration: 201.947574ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:39:45.856375Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.947549ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-19T18:39:45.856402Z","caller":"traceutil/trace.go:171","msg":"trace[297556485] range","detail":"{range_begin:/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset; range_end:; response_count:0; response_revision:395; }","duration":"206.977474ms","start":"2024-09-19T18:39:45.649415Z","end":"2024-09-19T18:39:45.856393Z","steps":["trace[297556485] 'agreement among raft nodes before linearized reading'  (duration: 206.93087ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:39:45.856532Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.416757ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-19T18:39:45.856554Z","caller":"traceutil/trace.go:171","msg":"trace[47804488] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:0; response_revision:395; }","duration":"103.442648ms","start":"2024-09-19T18:39:45.753105Z","end":"2024-09-19T18:39:45.856548Z","steps":["trace[47804488] 'agreement among raft nodes before linearized reading'  (duration: 103.402348ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:39:46.450928Z","caller":"traceutil/trace.go:171","msg":"trace[447015363] transaction","detail":"{read_only:false; response_revision:402; number_of_response:1; }","duration":"192.15555ms","start":"2024-09-19T18:39:46.258754Z","end":"2024-09-19T18:39:46.450910Z","steps":["trace[447015363] 'process raft request'  (duration: 192.041293ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:39:46.457451Z","caller":"traceutil/trace.go:171","msg":"trace[199583041] transaction","detail":"{read_only:false; response_revision:403; number_of_response:1; }","duration":"102.841342ms","start":"2024-09-19T18:39:46.354595Z","end":"2024-09-19T18:39:46.457437Z","steps":["trace[199583041] 'process raft request'  (duration: 102.766841ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:39:47.149186Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.608135ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128032005940909206 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-h29wt\" mod_revision:386 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-h29wt\" value_size:3943 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-h29wt\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-19T18:39:47.149875Z","caller":"traceutil/trace.go:171","msg":"trace[786871471] transaction","detail":"{read_only:false; response_revision:409; number_of_response:1; }","duration":"212.562991ms","start":"2024-09-19T18:39:46.937292Z","end":"2024-09-19T18:39:47.149855Z","steps":["trace[786871471] 'process raft request'  (duration: 110.633244ms)","trace[786871471] 'compare'  (duration: 100.378906ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-19T18:39:47.150124Z","caller":"traceutil/trace.go:171","msg":"trace[713102619] transaction","detail":"{read_only:false; response_revision:410; number_of_response:1; }","duration":"212.118368ms","start":"2024-09-19T18:39:46.937993Z","end":"2024-09-19T18:39:47.150111Z","steps":["trace[713102619] 'process raft request'  (duration: 211.29202ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:39:47.150315Z","caller":"traceutil/trace.go:171","msg":"trace[1466387580] transaction","detail":"{read_only:false; response_revision:411; number_of_response:1; }","duration":"203.943604ms","start":"2024-09-19T18:39:46.946361Z","end":"2024-09-19T18:39:47.150305Z","steps":["trace[1466387580] 'process raft request'  (duration: 203.030294ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:39:47.150417Z","caller":"traceutil/trace.go:171","msg":"trace[1484778379] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"202.338487ms","start":"2024-09-19T18:39:46.948072Z","end":"2024-09-19T18:39:47.150411Z","steps":["trace[1484778379] 'process raft request'  (duration: 201.364589ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:39:47.150492Z","caller":"traceutil/trace.go:171","msg":"trace[1762014815] linearizableReadLoop","detail":"{readStateIndex:421; appliedIndex:419; }","duration":"204.192549ms","start":"2024-09-19T18:39:46.946292Z","end":"2024-09-19T18:39:47.150485Z","steps":["trace[1762014815] 'read index received'  (duration: 101.644452ms)","trace[1762014815] 'applied index is now lower than readState.Index'  (duration: 102.547441ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-19T18:39:47.150718Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.417513ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-19T18:39:47.150742Z","caller":"traceutil/trace.go:171","msg":"trace[30934350] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/minikube-ingress-dns; range_end:; response_count:0; response_revision:413; }","duration":"204.449131ms","start":"2024-09-19T18:39:46.946286Z","end":"2024-09-19T18:39:47.150735Z","steps":["trace[30934350] 'agreement among raft nodes before linearized reading'  (duration: 204.399184ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:41:08.113307Z","caller":"traceutil/trace.go:171","msg":"trace[1867049731] transaction","detail":"{read_only:false; response_revision:1173; number_of_response:1; }","duration":"218.87531ms","start":"2024-09-19T18:41:07.893123Z","end":"2024-09-19T18:41:08.111998Z","steps":["trace[1867049731] 'process raft request'  (duration: 146.821964ms)","trace[1867049731] 'compare'  (duration: 71.937946ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-19T18:49:35.458285Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1609}
	{"level":"info","ts":"2024-09-19T18:49:35.481341Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1609,"took":"22.590141ms","hash":3032817660,"current-db-size-bytes":6651904,"current-db-size":"6.7 MB","current-db-size-in-use-bytes":3510272,"current-db-size-in-use":"3.5 MB"}
	{"level":"info","ts":"2024-09-19T18:49:35.481386Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3032817660,"revision":1609,"compact-revision":-1}
	{"level":"info","ts":"2024-09-19T18:54:35.463171Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2033}
	{"level":"info","ts":"2024-09-19T18:54:35.479457Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2033,"took":"15.735537ms","hash":3624308866,"current-db-size-bytes":6651904,"current-db-size":"6.7 MB","current-db-size-in-use-bytes":4227072,"current-db-size-in-use":"4.2 MB"}
	{"level":"info","ts":"2024-09-19T18:54:35.479504Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3624308866,"revision":2033,"compact-revision":1609}
	
	
	==> gcp-auth [8763c1c636d0e544cec68dd7fd43a6178da8c1609fed0cf08b900e90bcd721ae] <==
	2024/09/19 18:41:56 Ready to write response ...
	2024/09/19 18:41:57 Ready to marshal response ...
	2024/09/19 18:41:57 Ready to write response ...
	2024/09/19 18:41:57 Ready to marshal response ...
	2024/09/19 18:41:57 Ready to write response ...
	2024/09/19 18:50:00 Ready to marshal response ...
	2024/09/19 18:50:00 Ready to write response ...
	2024/09/19 18:50:00 Ready to marshal response ...
	2024/09/19 18:50:00 Ready to write response ...
	2024/09/19 18:50:06 Ready to marshal response ...
	2024/09/19 18:50:06 Ready to write response ...
	2024/09/19 18:50:09 Ready to marshal response ...
	2024/09/19 18:50:09 Ready to write response ...
	2024/09/19 18:50:09 Ready to marshal response ...
	2024/09/19 18:50:09 Ready to write response ...
	2024/09/19 18:50:59 Ready to marshal response ...
	2024/09/19 18:50:59 Ready to write response ...
	2024/09/19 18:50:59 Ready to marshal response ...
	2024/09/19 18:50:59 Ready to write response ...
	2024/09/19 18:50:59 Ready to marshal response ...
	2024/09/19 18:50:59 Ready to write response ...
	2024/09/19 18:51:33 Ready to marshal response ...
	2024/09/19 18:51:33 Ready to write response ...
	2024/09/19 18:51:42 Ready to marshal response ...
	2024/09/19 18:51:42 Ready to write response ...
	
	
	==> kernel <==
	 18:55:38 up  3:38,  0 users,  load average: 0.25, 0.22, 0.44
	Linux addons-685250 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea] <==
	I0919 18:53:28.351934       1 main.go:299] handling current node
	I0919 18:53:38.351196       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:53:38.351236       1 main.go:299] handling current node
	I0919 18:53:48.351573       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:53:48.351613       1 main.go:299] handling current node
	I0919 18:53:58.358258       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:53:58.358299       1 main.go:299] handling current node
	I0919 18:54:08.351385       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:54:08.351436       1 main.go:299] handling current node
	I0919 18:54:18.353091       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:54:18.353150       1 main.go:299] handling current node
	I0919 18:54:28.350866       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:54:28.350907       1 main.go:299] handling current node
	I0919 18:54:38.355399       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:54:38.355443       1 main.go:299] handling current node
	I0919 18:54:48.350983       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:54:48.351021       1 main.go:299] handling current node
	I0919 18:54:58.351456       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:54:58.351505       1 main.go:299] handling current node
	I0919 18:55:08.355945       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:55:08.355985       1 main.go:299] handling current node
	I0919 18:55:18.353447       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:55:18.353491       1 main.go:299] handling current node
	I0919 18:55:28.353417       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:55:28.353453       1 main.go:299] handling current node
	
	
	==> kube-apiserver [d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0919 18:41:46.384826       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.77.71:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.77.71:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.77.71:443: connect: connection refused" logger="UnhandledError"
	I0919 18:41:46.398246       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0919 18:50:10.564173       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:10.569821       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:10.575508       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:25.576915       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:30.878332       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:31.884590       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:32.891043       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:33.897594       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:34.904265       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:35.910640       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:36.916660       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:37.922615       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:38.928704       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:39.935718       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0919 18:50:59.939369       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.7.39"}
	I0919 18:51:21.107714       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0919 18:51:22.123982       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0919 18:51:39.581185       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.29:41094: read: connection reset by peer
	E0919 18:51:41.443959       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I0919 18:51:42.224676       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0919 18:51:42.394849       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.136.235"}
	
	
	==> kube-controller-manager [4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148] <==
	I0919 18:51:28.399705       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	W0919 18:51:31.445687       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:51:31.445728       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0919 18:51:41.905172       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-b48cc5f79" duration="8.511µs"
	W0919 18:51:42.682508       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:51:42.682559       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0919 18:51:43.580046       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-685250"
	I0919 18:51:43.942974       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0919 18:51:43.943019       1 shared_informer.go:320] Caches are synced for resource quota
	I0919 18:51:44.345759       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0919 18:51:44.345799       1 shared_informer.go:320] Caches are synced for garbage collector
	I0919 18:51:52.413869       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	W0919 18:51:57.889795       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:51:57.889847       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:52:27.558659       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:52:27.558704       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:53:24.382837       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:53:24.382902       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:53:58.320420       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:53:58.320480       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:54:36.903837       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:54:36.903888       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:55:34.730951       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:55:34.731007       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0919 18:55:36.833056       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="10.157µs"
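	
	The recurring PartialObjectMetadata watch failures begin right after the gadget addon teardown (the traces.gadget.kinvolk.io watchers were terminated at 18:51:22 in the apiserver log), which suggests a metadata informer still retrying a deleted CRD rather than a new fault. A quick check for leftovers, assuming the CRD name from that log line:
	
	  kubectl --context addons-685250 get crd traces.gadget.kinvolk.io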
	
	
	==> kube-proxy [1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d] <==
	I0919 18:39:47.957278       1 server_linux.go:66] "Using iptables proxy"
	I0919 18:39:49.044392       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0919 18:39:49.044560       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 18:39:49.357227       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 18:39:49.357310       1 server_linux.go:169] "Using iptables Proxier"
	I0919 18:39:49.437470       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 18:39:49.438149       1 server.go:483] "Version info" version="v1.31.1"
	I0919 18:39:49.438227       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 18:39:49.444383       1 config.go:199] "Starting service config controller"
	I0919 18:39:49.444434       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0919 18:39:49.444451       1 config.go:105] "Starting endpoint slice config controller"
	I0919 18:39:49.444468       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0919 18:39:49.445015       1 config.go:328] "Starting node config controller"
	I0919 18:39:49.445038       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0919 18:39:49.544520       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0919 18:39:49.544894       1 shared_informer.go:320] Caches are synced for service config
	I0919 18:39:49.545185       1 shared_informer.go:320] Caches are synced for node config
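	
	The startup warning about nodePortAddresses being unset is advisory; in a kubeadm-style cluster such as minikube the setting lives in the kube-proxy ConfigMap, so one hypothetical way to apply the value suggested by the warning is:
	
	  kubectl --context addons-685250 -n kube-system edit configmap kube-proxy
	  # then set nodePortAddresses: ["primary"] under the "config.conf" data key and
	  # delete the kube-proxy pod so the DaemonSet restarts it with the new config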
	
	
	==> kube-scheduler [a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae] <==
	W0919 18:39:36.759688       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0919 18:39:36.759698       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 18:39:36.759716       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:36.759719       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0919 18:39:36.759767       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0919 18:39:36.759715       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:37.577548       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0919 18:39:37.577594       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:37.591157       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 18:39:37.591194       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:37.662233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0919 18:39:37.662283       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:37.691829       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 18:39:37.691889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:37.691841       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0919 18:39:37.691945       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:37.788039       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0919 18:39:37.788093       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:37.902881       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 18:39:37.902929       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:37.943554       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0919 18:39:37.943606       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0919 18:39:37.964311       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0919 18:39:37.964357       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0919 18:39:40.957211       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 19 18:54:59 addons-685250 kubelet[1619]: E0919 18:54:59.647119    1619 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772099646822251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:522996,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:55:00 addons-685250 kubelet[1619]: E0919 18:55:00.354586    1619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="c9e71acf-38e0-445c-9d8f-3735cbf69aa1"
	Sep 19 18:55:09 addons-685250 kubelet[1619]: E0919 18:55:09.047242    1619 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 19 18:55:09 addons-685250 kubelet[1619]: E0919 18:55:09.047356    1619 kuberuntime_image.go:55] "Failed to pull image" err="loading manifest for target platform: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 19 18:55:09 addons-685250 kubelet[1619]: E0919 18:55:09.047487    1619 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOOGLE_APPLICATION_CREDENTIALS,Value:/google-app-creds.json,ValueFrom:nil,},EnvVar{Name:PROJECT_ID,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCP_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GOOGLE_CLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:CLOUDSDK_CORE_PROJECT,Value:this_is_fake,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w8nj8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:gcp-creds,ReadOnly:true,MountPath:/google-app-creds.json,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx_default(ebd6539d-2dc6-46b7-8766-cd26ce5e6547): ErrImagePull: loading manifest for target platform: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 19 18:55:09 addons-685250 kubelet[1619]: E0919 18:55:09.048666    1619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"loading manifest for target platform: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="ebd6539d-2dc6-46b7-8766-cd26ce5e6547"
	Sep 19 18:55:09 addons-685250 kubelet[1619]: E0919 18:55:09.355193    1619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod" podUID="337122f1-f839-443e-89c9-ab116e67ccad"
	Sep 19 18:55:09 addons-685250 kubelet[1619]: E0919 18:55:09.649319    1619 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772109649129514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:522996,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:55:09 addons-685250 kubelet[1619]: E0919 18:55:09.649355    1619 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772109649129514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:522996,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:55:15 addons-685250 kubelet[1619]: E0919 18:55:15.354599    1619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="c9e71acf-38e0-445c-9d8f-3735cbf69aa1"
	Sep 19 18:55:19 addons-685250 kubelet[1619]: E0919 18:55:19.651890    1619 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772119651687146,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:522996,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:55:19 addons-685250 kubelet[1619]: E0919 18:55:19.651937    1619 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772119651687146,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:522996,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:55:21 addons-685250 kubelet[1619]: E0919 18:55:21.354868    1619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="ebd6539d-2dc6-46b7-8766-cd26ce5e6547"
	Sep 19 18:55:21 addons-685250 kubelet[1619]: E0919 18:55:21.354877    1619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod" podUID="337122f1-f839-443e-89c9-ab116e67ccad"
	Sep 19 18:55:27 addons-685250 kubelet[1619]: E0919 18:55:27.355082    1619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="c9e71acf-38e0-445c-9d8f-3735cbf69aa1"
	Sep 19 18:55:29 addons-685250 kubelet[1619]: E0919 18:55:29.654265    1619 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772129654047596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:522996,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:55:29 addons-685250 kubelet[1619]: E0919 18:55:29.654300    1619 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772129654047596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:522996,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:55:32 addons-685250 kubelet[1619]: E0919 18:55:32.355099    1619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="ebd6539d-2dc6-46b7-8766-cd26ce5e6547"
	Sep 19 18:55:35 addons-685250 kubelet[1619]: E0919 18:55:35.354759    1619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod" podUID="337122f1-f839-443e-89c9-ab116e67ccad"
	Sep 19 18:55:38 addons-685250 kubelet[1619]: I0919 18:55:38.088901    1619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r9ph\" (UniqueName: \"kubernetes.io/projected/0041dcd9-b46b-406b-a78c-728fda2b92cc-kube-api-access-6r9ph\") pod \"0041dcd9-b46b-406b-a78c-728fda2b92cc\" (UID: \"0041dcd9-b46b-406b-a78c-728fda2b92cc\") "
	Sep 19 18:55:38 addons-685250 kubelet[1619]: I0919 18:55:38.088962    1619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0041dcd9-b46b-406b-a78c-728fda2b92cc-tmp-dir\") pod \"0041dcd9-b46b-406b-a78c-728fda2b92cc\" (UID: \"0041dcd9-b46b-406b-a78c-728fda2b92cc\") "
	Sep 19 18:55:38 addons-685250 kubelet[1619]: I0919 18:55:38.089372    1619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0041dcd9-b46b-406b-a78c-728fda2b92cc-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "0041dcd9-b46b-406b-a78c-728fda2b92cc" (UID: "0041dcd9-b46b-406b-a78c-728fda2b92cc"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 19 18:55:38 addons-685250 kubelet[1619]: I0919 18:55:38.091431    1619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0041dcd9-b46b-406b-a78c-728fda2b92cc-kube-api-access-6r9ph" (OuterVolumeSpecName: "kube-api-access-6r9ph") pod "0041dcd9-b46b-406b-a78c-728fda2b92cc" (UID: "0041dcd9-b46b-406b-a78c-728fda2b92cc"). InnerVolumeSpecName "kube-api-access-6r9ph". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 19 18:55:38 addons-685250 kubelet[1619]: I0919 18:55:38.190039    1619 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0041dcd9-b46b-406b-a78c-728fda2b92cc-tmp-dir\") on node \"addons-685250\" DevicePath \"\""
	Sep 19 18:55:38 addons-685250 kubelet[1619]: I0919 18:55:38.190072    1619 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6r9ph\" (UniqueName: \"kubernetes.io/projected/0041dcd9-b46b-406b-a78c-728fda2b92cc-kube-api-access-6r9ph\") on node \"addons-685250\" DevicePath \"\""
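	
	The nginx and task-pv-pod pulls are failing on the Docker Hub rate limit (toomanyrequests), not on the cluster itself. One hypothetical mitigation for a CI host is to pull the image once with authenticated credentials and side-load it into the node so the test never touches docker.io:
	
	  docker pull docker.io/nginx:alpine        # on a host with Docker Hub credentials
	  minikube -p addons-685250 image load nginx:alpine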
	
	
	==> storage-provisioner [c265d33c64155de4fde21bb6eae221bdd5a2524b7a15aa0b673f23ce4f17b12d] <==
	I0919 18:40:29.640679       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 18:40:29.648412       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 18:40:29.648464       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0919 18:40:29.655439       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 18:40:29.655525       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9a3690d0-7216-4b96-a260-4e04cffeb393", APIVersion:"v1", ResourceVersion:"963", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-685250_e66922b4-9073-4377-9148-47e4da8ece38 became leader
	I0919 18:40:29.655628       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-685250_e66922b4-9073-4377-9148-47e4da8ece38!
	I0919 18:40:29.756484       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-685250_e66922b4-9073-4377-9148-47e4da8ece38!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-685250 -n addons-685250
helpers_test.go:261: (dbg) Run:  kubectl --context addons-685250 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox nginx task-pv-pod ingress-nginx-admission-create-rqqsb ingress-nginx-admission-patch-zkk9z
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-685250 describe pod busybox nginx task-pv-pod ingress-nginx-admission-create-rqqsb ingress-nginx-admission-patch-zkk9z
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-685250 describe pod busybox nginx task-pv-pod ingress-nginx-admission-create-rqqsb ingress-nginx-admission-patch-zkk9z: exit status 1 (88.694901ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-685250/192.168.49.2
	Start Time:       Thu, 19 Sep 2024 18:41:57 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pbctc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-pbctc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  13m                   default-scheduler  Successfully assigned default/busybox to addons-685250
	  Normal   Pulling    12m (x4 over 13m)     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     12m (x4 over 13m)     kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     12m (x4 over 13m)     kubelet            Error: ErrImagePull
	  Warning  Failed     12m (x6 over 13m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3m30s (x43 over 13m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-685250/192.168.49.2
	Start Time:       Thu, 19 Sep 2024 18:51:42 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w8nj8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-w8nj8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  3m57s                 default-scheduler  Successfully assigned default/nginx to addons-685250
	  Warning  Failed     114s (x2 over 3m26s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    85s (x3 over 3m57s)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     30s (x3 over 3m26s)   kubelet            Error: ErrImagePull
	  Warning  Failed     30s                   kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    7s (x4 over 3m26s)    kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     7s (x4 over 3m26s)    kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-685250/192.168.49.2
	Start Time:       Thu, 19 Sep 2024 18:50:06 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.25
	IPs:
	  IP:  10.244.0.25
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mzftq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-mzftq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  5m33s                default-scheduler  Successfully assigned default/task-pv-pod to addons-685250
	  Warning  Failed     4m47s                kubelet            Failed to pull image "docker.io/nginx": determining manifest MIME type for docker://nginx:latest: reading manifest sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m25s                kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    98s (x4 over 5m33s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     67s (x4 over 4m47s)  kubelet            Error: ErrImagePull
	  Warning  Failed     67s (x2 over 4m2s)   kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     56s (x6 over 4m46s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    30s (x8 over 4m46s)  kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-rqqsb" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-zkk9z" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-685250 describe pod busybox nginx task-pv-pod ingress-nginx-admission-create-rqqsb ingress-nginx-admission-patch-zkk9z: exit status 1
--- FAIL: TestAddons/parallel/MetricsServer (339.29s)
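Every failing pod in this run is stuck in ImagePullBackOff against docker.io, and the kubelet events all cite "toomanyrequests", which points at Docker Hub's anonymous pull-rate limit rather than the addon under test. A minimal sketch for confirming the runner's remaining quota, assuming curl and jq are available on the host (this uses Docker's documented ratelimitpreview endpoint; it is not part of the test suite):

	# Fetch an anonymous pull token, then read the rate-limit headers
	# (see https://docs.docker.com/docker-hub/download-rate-limit/).
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -s --head -H "Authorization: Bearer $TOKEN" \
	  https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i '^ratelimit'

A depleted "ratelimit-remaining" value here would match the pull failures recorded above.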

                                                
                                    
TestAddons/parallel/CSI (368.8s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 4.034884ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-685250 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685250 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685250 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685250 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685250 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685250 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685250 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685250 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-685250 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [337122f1-f839-443e-89c9-ab116e67ccad] Pending
helpers_test.go:344: "task-pv-pod" [337122f1-f839-443e-89c9-ab116e67ccad] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:329: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:585: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:585: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-685250 -n addons-685250
addons_test.go:585: TestAddons/parallel/CSI: showing logs for failed pods as of 2024-09-19 18:56:06.377086047 +0000 UTC m=+1036.280045875
addons_test.go:585: (dbg) Run:  kubectl --context addons-685250 describe po task-pv-pod -n default
addons_test.go:585: (dbg) kubectl --context addons-685250 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-685250/192.168.49.2
Start Time:       Thu, 19 Sep 2024 18:50:06 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.25
IPs:
  IP:  10.244.0.25
Containers:
  task-pv-container:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /usr/share/nginx/html from task-pv-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mzftq (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  task-pv-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  hpvc
    ReadOnly:   false
  kube-api-access-mzftq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  6m                   default-scheduler  Successfully assigned default/task-pv-pod to addons-685250
  Warning  Failed     5m14s                kubelet            Failed to pull image "docker.io/nginx": determining manifest MIME type for docker://nginx:latest: reading manifest sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     2m52s                kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   Pulling    2m5s (x4 over 6m)    kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     94s (x4 over 5m14s)  kubelet            Error: ErrImagePull
  Warning  Failed     94s (x2 over 4m29s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     83s (x6 over 5m13s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    57s (x8 over 5m13s)  kubelet            Back-off pulling image "docker.io/nginx"
addons_test.go:585: (dbg) Run:  kubectl --context addons-685250 logs task-pv-pod -n default
addons_test.go:585: (dbg) Non-zero exit: kubectl --context addons-685250 logs task-pv-pod -n default: exit status 1 (68.119678ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:585: kubectl --context addons-685250 logs task-pv-pod -n default: exit status 1
addons_test.go:586: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
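The pvc "hpvc" wait completed above before the pod was created, so the CSI provisioning path itself appears healthy; the 6m0s timeout is the nginx image pull failing on Docker Hub's rate limit. A hedged mitigation sketch for a runner in this state: pull the image once on the host and side-load it into the cluster node with minikube's image load command, so kubelet never contacts Docker Hub (commands assume the addons-685250 profile from this run):

	# Pull on the host (uses the host's Docker Hub credentials/quota), then
	# copy the image into the minikube node's container runtime.
	docker pull docker.io/nginx
	minikube -p addons-685250 image load docker.io/nginx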
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/CSI]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-685250
helpers_test.go:235: (dbg) docker inspect addons-685250:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cdadbc576653c86e10c24ff9ab47a4cd6b7ecae7a3e166835365c09bb919cdaf",
	        "Created": "2024-09-19T18:39:26.544485958Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 762128,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-19T18:39:26.653035442Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bb3bcbaabeeeadbf6b43ae7d1d07e504b3c8a94ec024df89bcb237eba4f5e9b3",
	        "ResolvConfPath": "/var/lib/docker/containers/cdadbc576653c86e10c24ff9ab47a4cd6b7ecae7a3e166835365c09bb919cdaf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cdadbc576653c86e10c24ff9ab47a4cd6b7ecae7a3e166835365c09bb919cdaf/hostname",
	        "HostsPath": "/var/lib/docker/containers/cdadbc576653c86e10c24ff9ab47a4cd6b7ecae7a3e166835365c09bb919cdaf/hosts",
	        "LogPath": "/var/lib/docker/containers/cdadbc576653c86e10c24ff9ab47a4cd6b7ecae7a3e166835365c09bb919cdaf/cdadbc576653c86e10c24ff9ab47a4cd6b7ecae7a3e166835365c09bb919cdaf-json.log",
	        "Name": "/addons-685250",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-685250:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-685250",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/719a21df389c5cfbde7422092d8e14fa0af9502fcf666f7bf69a458b574172f9-init/diff:/var/lib/docker/overlay2/71eee05749e16aef5497ee0d3682f846917f1ee6949d544cdec1fff2723452d5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/719a21df389c5cfbde7422092d8e14fa0af9502fcf666f7bf69a458b574172f9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/719a21df389c5cfbde7422092d8e14fa0af9502fcf666f7bf69a458b574172f9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/719a21df389c5cfbde7422092d8e14fa0af9502fcf666f7bf69a458b574172f9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-685250",
	                "Source": "/var/lib/docker/volumes/addons-685250/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-685250",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-685250",
	                "name.minikube.sigs.k8s.io": "addons-685250",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1b0ccece079b2c012374acf46f9c349cae0c8bd9ae1a208e2d0acc049d21c7cb",
	            "SandboxKey": "/var/run/docker/netns/1b0ccece079b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33518"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33519"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33522"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33520"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33521"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-685250": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "3c159902c31cb41244d3423728e25a3f29e7e8e24a95c6da692d29e053f66798",
	                    "EndpointID": "51640df6c09057e35d4d5a9f04688e387f2981906971ee1afa85b24730ac60a3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-685250",
	                        "cdadbc576653"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-685250 -n addons-685250
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-685250 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-685250 logs -n 25: (1.216321823s)
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-845536                                                                     | download-only-845536   | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC | 19 Sep 24 18:38 UTC |
	| start   | -o=json --download-only                                                                     | download-only-759185   | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC |                     |
	|         | -p download-only-759185                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| delete  | -p download-only-759185                                                                     | download-only-759185   | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| delete  | -p download-only-845536                                                                     | download-only-845536   | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| delete  | -p download-only-759185                                                                     | download-only-759185   | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| start   | --download-only -p                                                                          | download-docker-985684 | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | download-docker-985684                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-985684                                                                   | download-docker-985684 | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-515604   | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | binary-mirror-515604                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:32895                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-515604                                                                     | binary-mirror-515604   | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:39 UTC |
	| addons  | disable dashboard -p                                                                        | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | addons-685250                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC |                     |
	|         | addons-685250                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-685250 --wait=true                                                                | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:39 UTC | 19 Sep 24 18:41 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-685250 ssh cat                                                                       | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:50 UTC | 19 Sep 24 18:50 UTC |
	|         | /opt/local-path-provisioner/pvc-83c31ed0-fc42-4249-94b0-a7e77464cc71_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-685250 addons disable                                                                | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:50 UTC | 19 Sep 24 18:50 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:50 UTC | 19 Sep 24 18:50 UTC |
	|         | addons-685250                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:50 UTC | 19 Sep 24 18:50 UTC |
	|         | -p addons-685250                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-685250 ip                                                                            | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	| addons  | addons-685250 addons disable                                                                | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-685250 addons disable                                                                | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-685250 addons disable                                                                | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	|         | addons-685250                                                                               |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	|         | -p addons-685250                                                                            |                        |         |         |                     |                     |
	| addons  | addons-685250 addons disable                                                                | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:51 UTC | 19 Sep 24 18:51 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-685250 addons                                                                        | addons-685250          | jenkins | v1.34.0 | 19 Sep 24 18:55 UTC | 19 Sep 24 18:55 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 18:39:03
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 18:39:03.200212  761388 out.go:345] Setting OutFile to fd 1 ...
	I0919 18:39:03.200467  761388 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:39:03.200476  761388 out.go:358] Setting ErrFile to fd 2...
	I0919 18:39:03.200481  761388 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:39:03.200718  761388 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-753213/.minikube/bin
	I0919 18:39:03.201426  761388 out.go:352] Setting JSON to false
	I0919 18:39:03.202398  761388 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12093,"bootTime":1726759050,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 18:39:03.202515  761388 start.go:139] virtualization: kvm guest
	I0919 18:39:03.204903  761388 out.go:177] * [addons-685250] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0919 18:39:03.206237  761388 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 18:39:03.206258  761388 notify.go:220] Checking for updates...
	I0919 18:39:03.208919  761388 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 18:39:03.210261  761388 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-753213/kubeconfig
	I0919 18:39:03.211535  761388 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-753213/.minikube
	I0919 18:39:03.212802  761388 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 18:39:03.213964  761388 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 18:39:03.215359  761388 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 18:39:03.237406  761388 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0919 18:39:03.237534  761388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:39:03.283495  761388 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-19 18:39:03.274719559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 18:39:03.283600  761388 docker.go:318] overlay module found
	I0919 18:39:03.286271  761388 out.go:177] * Using the docker driver based on user configuration
	I0919 18:39:03.287521  761388 start.go:297] selected driver: docker
	I0919 18:39:03.287534  761388 start.go:901] validating driver "docker" against <nil>
	I0919 18:39:03.287545  761388 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 18:39:03.288361  761388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:39:03.333412  761388 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-19 18:39:03.324780201 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 18:39:03.333593  761388 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 18:39:03.333839  761388 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 18:39:03.335585  761388 out.go:177] * Using Docker driver with root privileges
	I0919 18:39:03.336930  761388 cni.go:84] Creating CNI manager for ""
	I0919 18:39:03.336986  761388 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 18:39:03.336997  761388 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 18:39:03.337090  761388 start.go:340] cluster config:
	{Name:addons-685250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-685250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 18:39:03.338526  761388 out.go:177] * Starting "addons-685250" primary control-plane node in "addons-685250" cluster
	I0919 18:39:03.339809  761388 cache.go:121] Beginning downloading kic base image for docker with crio
	I0919 18:39:03.340995  761388 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0919 18:39:03.342026  761388 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 18:39:03.342057  761388 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19664-753213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0919 18:39:03.342055  761388 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0919 18:39:03.342063  761388 cache.go:56] Caching tarball of preloaded images
	I0919 18:39:03.342182  761388 preload.go:172] Found /home/jenkins/minikube-integration/19664-753213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 18:39:03.342194  761388 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0919 18:39:03.342520  761388 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/config.json ...
	I0919 18:39:03.342542  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/config.json: {Name:mk74efcccadcff6ea4a0787d2832be4be3984d30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
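The full cluster spec shown above is also persisted verbatim as JSON under the profile directory. It can be pretty-printed straight from the path in the preceding line (a convenience sketch, assuming python3 is available on the agent):
	python3 -m json.tool /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/config.json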
	I0919 18:39:03.359223  761388 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0919 18:39:03.359412  761388 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0919 18:39:03.359431  761388 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0919 18:39:03.359435  761388 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0919 18:39:03.359442  761388 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0919 18:39:03.359450  761388 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0919 18:39:14.708408  761388 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0919 18:39:14.708455  761388 cache.go:194] Successfully downloaded all kic artifacts
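The kic base image is pinned by digest, so the exact same bytes can be fetched and checked by hand if a cache is suspect (a sketch; docker accepts a tag@digest reference and resolves by the digest):
	docker pull gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
	docker images --digests gcr.io/k8s-minikube/kicbase-builds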
	I0919 18:39:14.708519  761388 start.go:360] acquireMachinesLock for addons-685250: {Name:mk56c74bc959dec1fb8992b737e0e35c0cd4ad03 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 18:39:14.708642  761388 start.go:364] duration metric: took 84.107µs to acquireMachinesLock for "addons-685250"
	I0919 18:39:14.708671  761388 start.go:93] Provisioning new machine with config: &{Name:addons-685250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-685250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 18:39:14.708780  761388 start.go:125] createHost starting for "" (driver="docker")
	I0919 18:39:14.710766  761388 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0919 18:39:14.711013  761388 start.go:159] libmachine.API.Create for "addons-685250" (driver="docker")
	I0919 18:39:14.711068  761388 client.go:168] LocalClient.Create starting
	I0919 18:39:14.711150  761388 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca.pem
	I0919 18:39:14.824308  761388 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/cert.pem
	I0919 18:39:15.025789  761388 cli_runner.go:164] Run: docker network inspect addons-685250 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 18:39:15.041206  761388 cli_runner.go:211] docker network inspect addons-685250 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 18:39:15.041292  761388 network_create.go:284] running [docker network inspect addons-685250] to gather additional debugging logs...
	I0919 18:39:15.041313  761388 cli_runner.go:164] Run: docker network inspect addons-685250
	W0919 18:39:15.056441  761388 cli_runner.go:211] docker network inspect addons-685250 returned with exit code 1
	I0919 18:39:15.056478  761388 network_create.go:287] error running [docker network inspect addons-685250]: docker network inspect addons-685250: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-685250 not found
	I0919 18:39:15.056490  761388 network_create.go:289] output of [docker network inspect addons-685250]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-685250 not found
	
	** /stderr **
	I0919 18:39:15.056606  761388 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 18:39:15.072776  761388 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001446920}
	I0919 18:39:15.072824  761388 network_create.go:124] attempt to create docker network addons-685250 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0919 18:39:15.072890  761388 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-685250 addons-685250
	I0919 18:39:15.132522  761388 network_create.go:108] docker network addons-685250 192.168.49.0/24 created
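A quick way to confirm the network really carries the requested CIDR and gateway (plain docker CLI, nothing minikube-specific):
	docker network inspect addons-685250 --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'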
	I0919 18:39:15.132554  761388 kic.go:121] calculated static IP "192.168.49.2" for the "addons-685250" container
	I0919 18:39:15.132644  761388 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 18:39:15.147671  761388 cli_runner.go:164] Run: docker volume create addons-685250 --label name.minikube.sigs.k8s.io=addons-685250 --label created_by.minikube.sigs.k8s.io=true
	I0919 18:39:15.163961  761388 oci.go:103] Successfully created a docker volume addons-685250
	I0919 18:39:15.164048  761388 cli_runner.go:164] Run: docker run --rm --name addons-685250-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-685250 --entrypoint /usr/bin/test -v addons-685250:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
	I0919 18:39:22.072772  761388 cli_runner.go:217] Completed: docker run --rm --name addons-685250-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-685250 --entrypoint /usr/bin/test -v addons-685250:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (6.908674607s)
	I0919 18:39:22.072803  761388 oci.go:107] Successfully prepared a docker volume addons-685250
	I0919 18:39:22.072836  761388 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 18:39:22.072868  761388 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 18:39:22.072944  761388 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19664-753213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-685250:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 18:39:26.483616  761388 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19664-753213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-685250:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (4.41062526s)
	I0919 18:39:26.483649  761388 kic.go:203] duration metric: took 4.410778812s to extract preloaded images to volume ...
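The sidecar pattern above (mount the fresh volume, then untar the lz4 preload into it) leaves the volume pre-populated with the container runtime's image store. A spot check that reuses the same kicbase image, so no extra tooling is assumed; /var/lib/containers is where cri-o keeps its storage:
	docker run --rm --entrypoint /bin/ls -v addons-685250:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 /var/lib/containers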
	W0919 18:39:26.483780  761388 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0919 18:39:26.483868  761388 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 18:39:26.529192  761388 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-685250 --name addons-685250 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-685250 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-685250 --network addons-685250 --ip 192.168.49.2 --volume addons-685250:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
	I0919 18:39:26.802037  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Running}}
	I0919 18:39:26.820911  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:26.839572  761388 cli_runner.go:164] Run: docker exec addons-685250 stat /var/lib/dpkg/alternatives/iptables
	I0919 18:39:26.880131  761388 oci.go:144] the created container "addons-685250" has a running status.
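All node ports in the run command above are published to ephemeral ports on 127.0.0.1, so the SSH mapping that surfaces later in this log as 33518 can be read back with the stock docker CLI:
	docker port addons-685250 22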
	I0919 18:39:26.880165  761388 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa...
	I0919 18:39:27.339670  761388 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 18:39:27.361758  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:27.379045  761388 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 18:39:27.379068  761388 kic_runner.go:114] Args: [docker exec --privileged addons-685250 chown docker:docker /home/docker/.ssh/authorized_keys]
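With the public key installed for the docker user, the node is reachable over plain ssh; a minimal sketch using the key path from this run and the host port reported by docker port:
	ssh -o StrictHostKeyChecking=no -i /home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa -p 33518 docker@127.0.0.1 hostname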
	I0919 18:39:27.421090  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:27.437982  761388 machine.go:93] provisionDockerMachine start ...
	I0919 18:39:27.438079  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:27.456233  761388 main.go:141] libmachine: Using SSH client type: native
	I0919 18:39:27.456524  761388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33518 <nil> <nil>}
	I0919 18:39:27.456542  761388 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 18:39:27.594819  761388 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-685250
	
	I0919 18:39:27.594862  761388 ubuntu.go:169] provisioning hostname "addons-685250"
	I0919 18:39:27.594952  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:27.613368  761388 main.go:141] libmachine: Using SSH client type: native
	I0919 18:39:27.613592  761388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33518 <nil> <nil>}
	I0919 18:39:27.613622  761388 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-685250 && echo "addons-685250" | sudo tee /etc/hostname
	I0919 18:39:27.754187  761388 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-685250
	
	I0919 18:39:27.754262  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:27.771895  761388 main.go:141] libmachine: Using SSH client type: native
	I0919 18:39:27.772132  761388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33518 <nil> <nil>}
	I0919 18:39:27.772152  761388 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-685250' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-685250/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-685250' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 18:39:27.903239  761388 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 18:39:27.903269  761388 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19664-753213/.minikube CaCertPath:/home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19664-753213/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19664-753213/.minikube}
	I0919 18:39:27.903324  761388 ubuntu.go:177] setting up certificates
	I0919 18:39:27.903341  761388 provision.go:84] configureAuth start
	I0919 18:39:27.903404  761388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-685250
	I0919 18:39:27.919357  761388 provision.go:143] copyHostCerts
	I0919 18:39:27.919427  761388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19664-753213/.minikube/key.pem (1679 bytes)
	I0919 18:39:27.919543  761388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19664-753213/.minikube/ca.pem (1082 bytes)
	I0919 18:39:27.919618  761388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19664-753213/.minikube/cert.pem (1123 bytes)
	I0919 18:39:27.919681  761388 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19664-753213/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca-key.pem org=jenkins.addons-685250 san=[127.0.0.1 192.168.49.2 addons-685250 localhost minikube]
	I0919 18:39:28.160212  761388 provision.go:177] copyRemoteCerts
	I0919 18:39:28.160283  761388 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 18:39:28.160320  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:28.177005  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:28.271718  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0919 18:39:28.293331  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 18:39:28.314500  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 18:39:28.335572  761388 provision.go:87] duration metric: took 432.21249ms to configureAuth
	I0919 18:39:28.335604  761388 ubuntu.go:193] setting minikube options for container-runtime
	I0919 18:39:28.335790  761388 config.go:182] Loaded profile config "addons-685250": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 18:39:28.335896  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:28.352244  761388 main.go:141] libmachine: Using SSH client type: native
	I0919 18:39:28.352438  761388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33518 <nil> <nil>}
	I0919 18:39:28.352454  761388 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 18:39:28.570762  761388 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 18:39:28.570788  761388 machine.go:96] duration metric: took 1.132783666s to provisionDockerMachine
	I0919 18:39:28.570801  761388 client.go:171] duration metric: took 13.859723313s to LocalClient.Create
	I0919 18:39:28.570823  761388 start.go:167] duration metric: took 13.859810827s to libmachine.API.Create "addons-685250"
	I0919 18:39:28.570832  761388 start.go:293] postStartSetup for "addons-685250" (driver="docker")
	I0919 18:39:28.570846  761388 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 18:39:28.570928  761388 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 18:39:28.570969  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:28.587920  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:28.684315  761388 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 18:39:28.687444  761388 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 18:39:28.687482  761388 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 18:39:28.687493  761388 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 18:39:28.687502  761388 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 18:39:28.687516  761388 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-753213/.minikube/addons for local assets ...
	I0919 18:39:28.687596  761388 filesync.go:126] Scanning /home/jenkins/minikube-integration/19664-753213/.minikube/files for local assets ...
	I0919 18:39:28.687629  761388 start.go:296] duration metric: took 116.788714ms for postStartSetup
	I0919 18:39:28.687939  761388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-685250
	I0919 18:39:28.704801  761388 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/config.json ...
	I0919 18:39:28.705071  761388 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 18:39:28.705124  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:28.721672  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:28.816217  761388 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 18:39:28.820354  761388 start.go:128] duration metric: took 14.111556683s to createHost
	I0919 18:39:28.820377  761388 start.go:83] releasing machines lock for "addons-685250", held for 14.111720986s
	I0919 18:39:28.820433  761388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-685250
	I0919 18:39:28.837043  761388 ssh_runner.go:195] Run: cat /version.json
	I0919 18:39:28.837093  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:28.837137  761388 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 18:39:28.837212  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:28.853306  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:28.853640  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:29.015641  761388 ssh_runner.go:195] Run: systemctl --version
	I0919 18:39:29.019690  761388 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 18:39:29.156274  761388 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 18:39:29.160605  761388 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 18:39:29.178821  761388 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 18:39:29.178900  761388 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 18:39:29.204313  761388 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 18:39:29.204337  761388 start.go:495] detecting cgroup driver to use...
	I0919 18:39:29.204370  761388 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0919 18:39:29.204409  761388 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 18:39:29.218099  761388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 18:39:29.228094  761388 docker.go:217] disabling cri-docker service (if available) ...
	I0919 18:39:29.228158  761388 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 18:39:29.240433  761388 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 18:39:29.253142  761388 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 18:39:29.326278  761388 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 18:39:29.406802  761388 docker.go:233] disabling docker service ...
	I0919 18:39:29.406859  761388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 18:39:29.424951  761388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 18:39:29.435168  761388 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 18:39:29.514566  761388 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 18:39:29.591355  761388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 18:39:29.601869  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 18:39:29.616535  761388 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0919 18:39:29.616600  761388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:39:29.625293  761388 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0919 18:39:29.625347  761388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:39:29.634150  761388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:39:29.642705  761388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:39:29.651092  761388 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 18:39:29.659117  761388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:39:29.667830  761388 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 18:39:29.681755  761388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
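Taken together, the sed edits above leave the drop-in with roughly these settings (a sketch of only the keys touched here; the real file carries more):
	# /etc/crio/crio.conf.d/02-crio.conf (relevant keys after the edits)
	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]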
	I0919 18:39:29.690617  761388 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 18:39:29.698112  761388 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 18:39:29.705724  761388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:39:29.785529  761388 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 18:39:29.878210  761388 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 18:39:29.878295  761388 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 18:39:29.881824  761388 start.go:563] Will wait 60s for crictl version
	I0919 18:39:29.881889  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:39:29.884918  761388 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 18:39:29.918116  761388 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 18:39:29.918200  761388 ssh_runner.go:195] Run: crio --version
	I0919 18:39:29.952309  761388 ssh_runner.go:195] Run: crio --version
	I0919 18:39:29.988286  761388 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0919 18:39:29.989606  761388 cli_runner.go:164] Run: docker network inspect addons-685250 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 18:39:30.005833  761388 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 18:39:30.009469  761388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
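The grep-into-temp-file-then-cp shape of that command is deliberate: docker bind-mounts /etc/hosts into the container as a single file, so the rename performed by sed -i fails on it, while overwriting in place with cp works. For reference (hypothetical one-liner, not from this run):
	sed -i 's/^/#/' /etc/hosts   # fails on a bind-mounted file: rename gives "Device or resource busy"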
	I0919 18:39:30.020164  761388 kubeadm.go:883] updating cluster {Name:addons-685250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-685250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 18:39:30.020281  761388 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 18:39:30.020325  761388 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 18:39:30.083858  761388 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 18:39:30.083879  761388 crio.go:433] Images already preloaded, skipping extraction
	I0919 18:39:30.083926  761388 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 18:39:30.116167  761388 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 18:39:30.116190  761388 cache_images.go:84] Images are preloaded, skipping loading
	I0919 18:39:30.116199  761388 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0919 18:39:30.116364  761388 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-685250 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-685250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 18:39:30.116428  761388 ssh_runner.go:195] Run: crio config
	I0919 18:39:30.156650  761388 cni.go:84] Creating CNI manager for ""
	I0919 18:39:30.156675  761388 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 18:39:30.156688  761388 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 18:39:30.156711  761388 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-685250 NodeName:addons-685250 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 18:39:30.156845  761388 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-685250"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 18:39:30.156908  761388 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0919 18:39:30.165387  761388 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 18:39:30.165448  761388 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 18:39:30.173207  761388 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0919 18:39:30.188946  761388 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 18:39:30.205638  761388 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
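The rendered kubeadm config lands on the node as kubeadm.yaml.new before being promoted. Recent kubeadm releases can lint such a file directly (a sketch; the validate subcommand ships with current kubeadm CLIs):
	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new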
	I0919 18:39:30.222877  761388 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0919 18:39:30.226085  761388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 18:39:30.236096  761388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:39:30.319405  761388 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 18:39:30.332104  761388 certs.go:68] Setting up /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250 for IP: 192.168.49.2
	I0919 18:39:30.332125  761388 certs.go:194] generating shared ca certs ...
	I0919 18:39:30.332140  761388 certs.go:226] acquiring lock for ca certs: {Name:mkac4e621bd7a8886df3f6838bd34b99172c371a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.332275  761388 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19664-753213/.minikube/ca.key
	I0919 18:39:30.528690  761388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-753213/.minikube/ca.crt ...
	I0919 18:39:30.528724  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/ca.crt: {Name:mked4ee6d8831516d03c840d59935532e3f21cd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.528941  761388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-753213/.minikube/ca.key ...
	I0919 18:39:30.528958  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/ca.key: {Name:mkcb02ba3f86d66b352caba2841d6dd380f76edb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.529067  761388 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19664-753213/.minikube/proxy-client-ca.key
	I0919 18:39:30.624034  761388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-753213/.minikube/proxy-client-ca.crt ...
	I0919 18:39:30.624068  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/proxy-client-ca.crt: {Name:mkaa7904f1d229a9140b6f62d1d672cf00a2f2cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.624277  761388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-753213/.minikube/proxy-client-ca.key ...
	I0919 18:39:30.624295  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/proxy-client-ca.key: {Name:mkb6bb0d0409e9bd1f254506994f2a2447e5cc79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.624398  761388 certs.go:256] generating profile certs ...
	I0919 18:39:30.624464  761388 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.key
	I0919 18:39:30.624490  761388 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt with IP's: []
	I0919 18:39:30.752151  761388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt ...
	I0919 18:39:30.752185  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt: {Name:mk69a3ec8793b5371f583f88b2bebacea2af07ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.752390  761388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.key ...
	I0919 18:39:30.752406  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.key: {Name:mk7d143fc1d3dd645310e55acf6f951beafc9848 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.752506  761388 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.key.89e36966
	I0919 18:39:30.752526  761388 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.crt.89e36966 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0919 18:39:30.915660  761388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.crt.89e36966 ...
	I0919 18:39:30.915697  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.crt.89e36966: {Name:mkdb41eb017de5d424bda2067b62b8ceafaf07c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.915911  761388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.key.89e36966 ...
	I0919 18:39:30.915931  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.key.89e36966: {Name:mkbc3d5e5a7473c69994a57b2f0a8b8707ffe9e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:30.916041  761388 certs.go:381] copying /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.crt.89e36966 -> /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.crt
	I0919 18:39:30.916130  761388 certs.go:385] copying /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.key.89e36966 -> /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.key
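The apiserver serving certificate must cover every address clients dial, which is why the SAN list requested above spans the service VIP, loopback, and the node IP. The issued SANs can be read back with openssl:
	openssl x509 -noout -text -in /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.crt | grep -A1 'Subject Alternative Name'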
	I0919 18:39:30.916176  761388 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/proxy-client.key
	I0919 18:39:30.916195  761388 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/proxy-client.crt with IP's: []
	I0919 18:39:31.094514  761388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/proxy-client.crt ...
	I0919 18:39:31.094599  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/proxy-client.crt: {Name:mk9dc2f777ee8d63ffc9f5a10453c45f6382bf93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:31.094776  761388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/proxy-client.key ...
	I0919 18:39:31.094791  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/proxy-client.key: {Name:mk32678ed11fe18054a48114b5283e466fb989c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:31.094999  761388 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 18:39:31.095055  761388 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/ca.pem (1082 bytes)
	I0919 18:39:31.095092  761388 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/cert.pem (1123 bytes)
	I0919 18:39:31.095124  761388 certs.go:484] found cert: /home/jenkins/minikube-integration/19664-753213/.minikube/certs/key.pem (1679 bytes)
	I0919 18:39:31.095878  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 18:39:31.120600  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 18:39:31.142506  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 18:39:31.164187  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0919 18:39:31.185942  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0919 18:39:31.207396  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 18:39:31.229449  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 18:39:31.250877  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 18:39:31.272098  761388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19664-753213/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 18:39:31.293403  761388 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 18:39:31.308896  761388 ssh_runner.go:195] Run: openssl version
	I0919 18:39:31.314017  761388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 18:39:31.322554  761388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:39:31.325634  761388 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 18:39 /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:39:31.325693  761388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 18:39:31.331892  761388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
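The b5213941.0 link name follows OpenSSL's subject-hash convention: verifiers look up a CA in /etc/ssl/certs by the hash of its subject plus a numeric suffix, and that hash is exactly what the x509 -hash call two lines up computes:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0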
	I0919 18:39:31.340220  761388 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 18:39:31.343178  761388 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 18:39:31.343230  761388 kubeadm.go:392] StartCluster: {Name:addons-685250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-685250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 18:39:31.343328  761388 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 18:39:31.343377  761388 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 18:39:31.376569  761388 cri.go:89] found id: ""
	I0919 18:39:31.376645  761388 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 18:39:31.384955  761388 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 18:39:31.393013  761388 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 18:39:31.393065  761388 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 18:39:31.400980  761388 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 18:39:31.400998  761388 kubeadm.go:157] found existing configuration files:
	
	I0919 18:39:31.401035  761388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 18:39:31.408813  761388 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 18:39:31.408861  761388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 18:39:31.416662  761388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 18:39:31.424342  761388 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 18:39:31.424386  761388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 18:39:31.431658  761388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 18:39:31.438947  761388 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 18:39:31.438996  761388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 18:39:31.445986  761388 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 18:39:31.453391  761388 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 18:39:31.453444  761388 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
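The lines above show minikube's stale-kubeconfig sweep: each expected kubeconfig is grepped for the current control-plane endpoint, and any file failing the check is removed so kubeadm can regenerate it. A minimal Go sketch of that pattern, with the endpoint and file list taken from the log (an illustration, not minikube's actual implementation):

	package main

	import (
		"fmt"
		"os/exec"
	)

	const endpoint = "https://control-plane.minikube.internal:8443"

	var confs = []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}

	func main() {
		for _, c := range confs {
			// grep exits non-zero both when the endpoint is absent and when
			// the file does not exist, which is why the log prints
			// "may not be in ... - will remove" even on a fresh node.
			if err := exec.Command("sudo", "grep", endpoint, c).Run(); err != nil {
				fmt.Printf("removing possibly stale %s\n", c)
				_ = exec.Command("sudo", "rm", "-f", c).Run()
			}
		}
	}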
	I0919 18:39:31.460734  761388 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 18:39:31.495835  761388 kubeadm.go:310] W0919 18:39:31.495183    1289 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0919 18:39:31.496393  761388 kubeadm.go:310] W0919 18:39:31.495823    1289 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0919 18:39:31.513844  761388 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-gcp\n", err: exit status 1
	I0919 18:39:31.563421  761388 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 18:39:40.033093  761388 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0919 18:39:40.033184  761388 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 18:39:40.033278  761388 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 18:39:40.033324  761388 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-gcp
	I0919 18:39:40.033356  761388 kubeadm.go:310] OS: Linux
	I0919 18:39:40.033398  761388 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 18:39:40.033437  761388 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0919 18:39:40.033482  761388 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 18:39:40.033521  761388 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 18:39:40.033566  761388 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 18:39:40.033607  761388 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 18:39:40.033655  761388 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 18:39:40.033699  761388 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 18:39:40.033736  761388 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0919 18:39:40.033793  761388 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 18:39:40.033891  761388 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 18:39:40.034008  761388 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 18:39:40.034100  761388 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 18:39:40.035787  761388 out.go:235]   - Generating certificates and keys ...
	I0919 18:39:40.035950  761388 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 18:39:40.036208  761388 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 18:39:40.036312  761388 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 18:39:40.036391  761388 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 18:39:40.036476  761388 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 18:39:40.036548  761388 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 18:39:40.036641  761388 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 18:39:40.036746  761388 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-685250 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 18:39:40.036794  761388 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 18:39:40.036940  761388 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-685250 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 18:39:40.037024  761388 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 18:39:40.037075  761388 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 18:39:40.037112  761388 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 18:39:40.037161  761388 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 18:39:40.037201  761388 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 18:39:40.037258  761388 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 18:39:40.037338  761388 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 18:39:40.037448  761388 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 18:39:40.037533  761388 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 18:39:40.037626  761388 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 18:39:40.037718  761388 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 18:39:40.039316  761388 out.go:235]   - Booting up control plane ...
	I0919 18:39:40.039415  761388 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 18:39:40.039524  761388 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 18:39:40.039619  761388 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 18:39:40.039728  761388 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 18:39:40.039841  761388 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 18:39:40.039909  761388 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 18:39:40.040093  761388 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 18:39:40.040237  761388 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 18:39:40.040290  761388 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.645723ms
	I0919 18:39:40.040356  761388 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0919 18:39:40.040404  761388 kubeadm.go:310] [api-check] The API server is healthy after 4.502008624s
	I0919 18:39:40.040492  761388 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 18:39:40.040605  761388 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 18:39:40.040687  761388 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 18:39:40.040875  761388 kubeadm.go:310] [mark-control-plane] Marking the node addons-685250 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 18:39:40.040960  761388 kubeadm.go:310] [bootstrap-token] Using token: ijm4ly.86nu9uivdcvgfqko
	I0919 18:39:40.042478  761388 out.go:235]   - Configuring RBAC rules ...
	I0919 18:39:40.042563  761388 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 18:39:40.042634  761388 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 18:39:40.042751  761388 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 18:39:40.042898  761388 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 18:39:40.043013  761388 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 18:39:40.043111  761388 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 18:39:40.043261  761388 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 18:39:40.043324  761388 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 18:39:40.043388  761388 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 18:39:40.043398  761388 kubeadm.go:310] 
	I0919 18:39:40.043485  761388 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 18:39:40.043499  761388 kubeadm.go:310] 
	I0919 18:39:40.043591  761388 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 18:39:40.043599  761388 kubeadm.go:310] 
	I0919 18:39:40.043634  761388 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 18:39:40.043719  761388 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 18:39:40.043765  761388 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 18:39:40.043770  761388 kubeadm.go:310] 
	I0919 18:39:40.043812  761388 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 18:39:40.043817  761388 kubeadm.go:310] 
	I0919 18:39:40.043857  761388 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 18:39:40.043862  761388 kubeadm.go:310] 
	I0919 18:39:40.043902  761388 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 18:39:40.043999  761388 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 18:39:40.044089  761388 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 18:39:40.044096  761388 kubeadm.go:310] 
	I0919 18:39:40.044175  761388 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 18:39:40.044258  761388 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 18:39:40.044266  761388 kubeadm.go:310] 
	I0919 18:39:40.044382  761388 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ijm4ly.86nu9uivdcvgfqko \
	I0919 18:39:40.044505  761388 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0d3b67c6a36b796da7b157a4d4acdf893c00e58f1cfebf42e9b32e5d1fd17179 \
	I0919 18:39:40.044525  761388 kubeadm.go:310] 	--control-plane 
	I0919 18:39:40.044531  761388 kubeadm.go:310] 
	I0919 18:39:40.044599  761388 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 18:39:40.044606  761388 kubeadm.go:310] 
	I0919 18:39:40.044684  761388 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ijm4ly.86nu9uivdcvgfqko \
	I0919 18:39:40.044851  761388 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0d3b67c6a36b796da7b157a4d4acdf893c00e58f1cfebf42e9b32e5d1fd17179 
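The join commands above pin the cluster CA via --discovery-token-ca-cert-hash. kubeadm defines that value as the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info, so it can be recomputed from the certificate on disk; a short sketch, using the certificateDir logged earlier:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Same digest kubeadm prints after "sha256:" in the join command.
		fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
	}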
	I0919 18:39:40.044867  761388 cni.go:84] Creating CNI manager for ""
	I0919 18:39:40.044876  761388 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 18:39:40.046449  761388 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0919 18:39:40.047787  761388 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 18:39:40.051623  761388 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0919 18:39:40.051638  761388 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 18:39:40.069179  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
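Because the log reports `"docker" driver + "crio" runtime found, recommending kindnet`, the cni.yaml applied above is the kindnet manifest. A toy sketch of that selection step; the fallback branch is an assumption for illustration, not minikube's real decision table:

	package main

	import "fmt"

	// chooseCNI mirrors the decision logged above: the docker driver with the
	// crio runtime gets kindnet.
	func chooseCNI(driver, containerRuntime string) string {
		if driver == "docker" && containerRuntime == "crio" {
			return "kindnet"
		}
		return "bridge" // hypothetical default
	}

	func main() {
		fmt.Println(chooseCNI("docker", "crio")) // kindnet
	}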
	I0919 18:39:40.264712  761388 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 18:39:40.264794  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:40.264800  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-685250 minikube.k8s.io/updated_at=2024_09_19T18_39_40_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef minikube.k8s.io/name=addons-685250 minikube.k8s.io/primary=true
	I0919 18:39:40.272124  761388 ops.go:34] apiserver oom_adj: -16
	I0919 18:39:40.450150  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:40.950813  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:41.450429  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:41.950463  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:42.450542  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:42.950992  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:43.451199  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:43.950242  761388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 18:39:44.012691  761388 kubeadm.go:1113] duration metric: took 3.747963897s to wait for elevateKubeSystemPrivileges
	I0919 18:39:44.012729  761388 kubeadm.go:394] duration metric: took 12.669506054s to StartCluster
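The burst of `kubectl get sa default` runs above is a readiness poll: the `default` ServiceAccount is created asynchronously by the controller-manager, so minikube re-checks roughly every 500ms until it appears (the 3.7s duration metric covers eight attempts) before declaring the cluster's privileges elevated. A standalone sketch of the same loop, with an assumed overall timeout:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
		deadline := time.Now().Add(2 * time.Minute) // assumed timeout
		for time.Now().Before(deadline) {
			err := exec.Command("sudo", kubectl, "get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
			if err == nil {
				fmt.Println("default service account ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for default service account")
	}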
	I0919 18:39:44.012758  761388 settings.go:142] acquiring lock: {Name:mkba96297ae0a710684a3a2a45be357ed7205f56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:44.012903  761388 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19664-753213/kubeconfig
	I0919 18:39:44.013318  761388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/kubeconfig: {Name:mk7bd3287a61595c1c20478c3038a77f636ffaa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:39:44.013536  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 18:39:44.013566  761388 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 18:39:44.013636  761388 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
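The out-of-order `Setting addon` / `Checking if "addons-685250" exists` lines that follow are not corruption: each enabled addon from the toEnable map above is handled by its own goroutine, so their log lines interleave. A minimal fan-out sketch under that assumption (the enable step is a hypothetical stand-in):

	package main

	import (
		"fmt"
		"sync"
	)

	func main() {
		toEnable := map[string]bool{"registry": true, "ingress": true, "metrics-server": true}
		var wg sync.WaitGroup
		for name, enabled := range toEnable {
			if !enabled {
				continue
			}
			wg.Add(1)
			go func(n string) {
				defer wg.Done()
				fmt.Println("Setting addon", n) // stand-in for the real enable step
			}(name)
		}
		wg.Wait()
	}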
	I0919 18:39:44.013758  761388 addons.go:69] Setting yakd=true in profile "addons-685250"
	I0919 18:39:44.013778  761388 addons.go:69] Setting helm-tiller=true in profile "addons-685250"
	I0919 18:39:44.013797  761388 addons.go:69] Setting registry=true in profile "addons-685250"
	I0919 18:39:44.013801  761388 addons.go:69] Setting ingress=true in profile "addons-685250"
	I0919 18:39:44.013794  761388 addons.go:69] Setting metrics-server=true in profile "addons-685250"
	I0919 18:39:44.013782  761388 addons.go:234] Setting addon yakd=true in "addons-685250"
	I0919 18:39:44.013816  761388 addons.go:234] Setting addon ingress=true in "addons-685250"
	I0919 18:39:44.013818  761388 addons.go:69] Setting storage-provisioner=true in profile "addons-685250"
	I0919 18:39:44.013824  761388 addons.go:234] Setting addon metrics-server=true in "addons-685250"
	I0919 18:39:44.013824  761388 config.go:182] Loaded profile config "addons-685250": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 18:39:44.013835  761388 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-685250"
	I0919 18:39:44.013850  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.013852  761388 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-685250"
	I0919 18:39:44.013855  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.013828  761388 addons.go:234] Setting addon storage-provisioner=true in "addons-685250"
	I0919 18:39:44.013859  761388 addons.go:69] Setting ingress-dns=true in profile "addons-685250"
	I0919 18:39:44.013875  761388 addons.go:69] Setting inspektor-gadget=true in profile "addons-685250"
	I0919 18:39:44.013891  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.013904  761388 addons.go:69] Setting default-storageclass=true in profile "addons-685250"
	I0919 18:39:44.013905  761388 addons.go:69] Setting gcp-auth=true in profile "addons-685250"
	I0919 18:39:44.013920  761388 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-685250"
	I0919 18:39:44.013928  761388 mustload.go:65] Loading cluster: addons-685250
	I0919 18:39:44.013810  761388 addons.go:234] Setting addon helm-tiller=true in "addons-685250"
	I0919 18:39:44.013987  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.014106  761388 config.go:182] Loaded profile config "addons-685250": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 18:39:44.013760  761388 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-685250"
	I0919 18:39:44.014180  761388 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-685250"
	I0919 18:39:44.014213  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.014224  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.014234  761388 addons.go:69] Setting volcano=true in profile "addons-685250"
	I0919 18:39:44.014289  761388 addons.go:234] Setting addon volcano=true in "addons-685250"
	I0919 18:39:44.014321  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.014369  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.014420  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.014444  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.014529  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.014668  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.014766  761388 addons.go:69] Setting volumesnapshots=true in profile "addons-685250"
	I0919 18:39:44.014784  761388 addons.go:234] Setting addon volumesnapshots=true in "addons-685250"
	I0919 18:39:44.014224  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.014811  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.014813  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.013790  761388 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-685250"
	I0919 18:39:44.014885  761388 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-685250"
	I0919 18:39:44.014921  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.013892  761388 addons.go:234] Setting addon ingress-dns=true in "addons-685250"
	I0919 18:39:44.015381  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.015478  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.013782  761388 addons.go:69] Setting cloud-spanner=true in profile "addons-685250"
	I0919 18:39:44.015604  761388 addons.go:234] Setting addon cloud-spanner=true in "addons-685250"
	I0919 18:39:44.015632  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.013894  761388 addons.go:234] Setting addon inspektor-gadget=true in "addons-685250"
	I0919 18:39:44.015698  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.016016  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.016089  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.015481  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.016191  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.013861  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.017759  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.020298  761388 out.go:177] * Verifying Kubernetes components...
	I0919 18:39:44.015297  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.013811  761388 addons.go:234] Setting addon registry=true in "addons-685250"
	I0919 18:39:44.026436  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.028211  761388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 18:39:44.037105  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.048567  761388 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0919 18:39:44.048657  761388 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 18:39:44.050374  761388 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0919 18:39:44.050397  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0919 18:39:44.050461  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.052343  761388 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0919 18:39:44.060733  761388 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 18:39:44.062707  761388 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0919 18:39:44.062730  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0919 18:39:44.062789  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.081544  761388 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0919 18:39:44.081631  761388 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0919 18:39:44.083278  761388 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0919 18:39:44.083339  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0919 18:39:44.083408  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.086304  761388 out.go:177]   - Using image docker.io/registry:2.8.3
	I0919 18:39:44.086735  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0919 18:39:44.088743  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0919 18:39:44.088872  761388 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0919 18:39:44.091114  761388 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-685250"
	I0919 18:39:44.091164  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.091489  761388 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0919 18:39:44.091508  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0919 18:39:44.091564  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.091649  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:44.091952  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.092800  761388 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0919 18:39:44.092818  761388 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0919 18:39:44.092889  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.094032  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0919 18:39:44.101275  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0919 18:39:44.103871  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0919 18:39:44.106750  761388 addons.go:234] Setting addon default-storageclass=true in "addons-685250"
	I0919 18:39:44.106804  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:44.107282  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	W0919 18:39:44.109675  761388 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0919 18:39:44.110326  761388 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 18:39:44.110334  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0919 18:39:44.112386  761388 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:39:44.112408  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 18:39:44.112472  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.112565  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0919 18:39:44.113382  761388 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0919 18:39:44.114898  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0919 18:39:44.114906  761388 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 18:39:44.114925  761388 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 18:39:44.114984  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.116662  761388 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0919 18:39:44.116682  761388 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0919 18:39:44.116748  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.119259  761388 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0919 18:39:44.120516  761388 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0919 18:39:44.120540  761388 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0919 18:39:44.120610  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.123773  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.136078  761388 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0919 18:39:44.138681  761388 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0919 18:39:44.138709  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0919 18:39:44.138773  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.144207  761388 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0919 18:39:44.145527  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.145578  761388 out.go:177]   - Using image docker.io/busybox:stable
	I0919 18:39:44.146995  761388 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0919 18:39:44.147017  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0919 18:39:44.147076  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.152809  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.156308  761388 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0919 18:39:44.157886  761388 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0919 18:39:44.157903  761388 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0919 18:39:44.157925  761388 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0919 18:39:44.157985  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.162886  761388 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 18:39:44.162909  761388 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 18:39:44.162966  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.163450  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.166881  761388 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0919 18:39:44.166906  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0919 18:39:44.166969  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:44.172034  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.180781  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
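The long pipeline above edits CoreDNS's Corefile in place: sed inserts a `hosts` block ahead of the `forward . /etc/resolv.conf` directive (and a `log` directive ahead of `errors`), and the result is fed back through `kubectl replace`. Reconstructed from the sed expression itself, the injected fragment is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

This is what lets pods resolve host.minikube.internal to the host-side gateway, confirmed by the "host record injected into CoreDNS's ConfigMap" line further down.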
	I0919 18:39:44.183673  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.189557  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.190040  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.198542  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.202993  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.203703  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.205321  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.208823  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:44.209666  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	W0919 18:39:44.241755  761388 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0919 18:39:44.241799  761388 retry.go:31] will retry after 368.513545ms: ssh: handshake failed: EOF
	W0919 18:39:44.241901  761388 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0919 18:39:44.241912  761388 retry.go:31] will retry after 353.358743ms: ssh: handshake failed: EOF
	W0919 18:39:44.241992  761388 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0919 18:39:44.242019  761388 retry.go:31] will retry after 239.291473ms: ssh: handshake failed: EOF
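The three handshake failures above are transient (sshd inside the freshly started container is not yet accepting connections), so each dial is retried after a short randomized backoff. A self-contained sketch of the pattern; the attempt cap and backoff bounds are assumptions matched loosely to the 239-368ms delays in the log:

	package main

	import (
		"fmt"
		"math/rand"
		"net"
		"time"
	)

	func dialWithRetry(addr string) (net.Conn, error) {
		var lastErr error
		for attempt := 0; attempt < 5; attempt++ { // attempt cap is an assumption
			conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				return conn, nil
			}
			lastErr = err
			// Randomized sub-second backoff before the next attempt.
			d := time.Duration(200+rand.Intn(300)) * time.Millisecond
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return nil, lastErr
	}

	func main() {
		if _, err := dialWithRetry("127.0.0.1:33518"); err != nil {
			fmt.Println("giving up:", err)
		}
	}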
	I0919 18:39:44.351392  761388 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 18:39:44.437649  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0919 18:39:44.536099  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0919 18:39:44.541975  761388 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0919 18:39:44.542004  761388 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0919 18:39:44.544666  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 18:39:44.646013  761388 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0919 18:39:44.646047  761388 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0919 18:39:44.743483  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 18:39:44.743812  761388 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0919 18:39:44.743879  761388 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0919 18:39:44.839790  761388 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 18:39:44.839821  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0919 18:39:44.840867  761388 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0919 18:39:44.840892  761388 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0919 18:39:44.844891  761388 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0919 18:39:44.844913  761388 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0919 18:39:44.859724  761388 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0919 18:39:44.859754  761388 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0919 18:39:44.945601  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0919 18:39:44.948297  761388 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0919 18:39:44.948369  761388 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0919 18:39:44.953207  761388 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0919 18:39:44.953285  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0919 18:39:45.049434  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0919 18:39:45.049642  761388 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0919 18:39:45.049698  761388 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0919 18:39:45.055848  761388 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0919 18:39:45.055950  761388 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0919 18:39:45.058998  761388 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 18:39:45.059024  761388 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 18:39:45.141944  761388 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0919 18:39:45.141986  761388 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0919 18:39:45.156162  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0919 18:39:45.246810  761388 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0919 18:39:45.246840  761388 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0919 18:39:45.256490  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0919 18:39:45.437813  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0919 18:39:45.441833  761388 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 18:39:45.441871  761388 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 18:39:45.549176  761388 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0919 18:39:45.549265  761388 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0919 18:39:45.637502  761388 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0919 18:39:45.637591  761388 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0919 18:39:45.642826  761388 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.2913856s)
	I0919 18:39:45.644038  761388 node_ready.go:35] waiting up to 6m0s for node "addons-685250" to be "Ready" ...
	I0919 18:39:45.644391  761388 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.463571637s)
	I0919 18:39:45.644468  761388 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0919 18:39:45.647199  761388 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0919 18:39:45.647259  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0919 18:39:45.737336  761388 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0919 18:39:45.737429  761388 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0919 18:39:45.754802  761388 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0919 18:39:45.754834  761388 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0919 18:39:45.836195  761388 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0919 18:39:45.836236  761388 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0919 18:39:45.851797  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 18:39:45.936024  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0919 18:39:45.956936  761388 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0919 18:39:45.956972  761388 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0919 18:39:46.159873  761388 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0919 18:39:46.159908  761388 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0919 18:39:46.337448  761388 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0919 18:39:46.337478  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0919 18:39:46.356760  761388 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-685250" context rescaled to 1 replicas
	I0919 18:39:46.436892  761388 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0919 18:39:46.436928  761388 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0919 18:39:46.537037  761388 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0919 18:39:46.537072  761388 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0919 18:39:46.746236  761388 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:39:46.746266  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0919 18:39:46.854918  761388 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0919 18:39:46.855018  761388 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0919 18:39:46.946936  761388 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0919 18:39:46.946983  761388 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0919 18:39:47.236798  761388 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0919 18:39:47.236841  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0919 18:39:47.246825  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:39:47.257114  761388 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0919 18:39:47.257149  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0919 18:39:47.453170  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0919 18:39:47.542740  761388 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0919 18:39:47.542772  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0919 18:39:47.659810  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:39:47.759785  761388 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0919 18:39:47.759819  761388 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0919 18:39:47.957548  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0919 18:39:50.147172  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:39:50.150873  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.713170158s)
	I0919 18:39:50.150919  761388 addons.go:475] Verifying addon ingress=true in "addons-685250"
	I0919 18:39:50.150938  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.614729552s)
	I0919 18:39:50.151045  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.606300895s)
	I0919 18:39:50.151091  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.407584065s)
	I0919 18:39:50.151204  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.205541455s)
	I0919 18:39:50.151283  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.101743958s)
	I0919 18:39:50.151334  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.995098572s)
	I0919 18:39:50.151399  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.89486624s)
	I0919 18:39:50.151505  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.713655603s)
	I0919 18:39:50.151528  761388 addons.go:475] Verifying addon registry=true in "addons-685250"
	I0919 18:39:50.151594  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.29976078s)
	I0919 18:39:50.151618  761388 addons.go:475] Verifying addon metrics-server=true in "addons-685250"
	I0919 18:39:50.151657  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.215596812s)
	I0919 18:39:50.152907  761388 out.go:177] * Verifying ingress addon...
	I0919 18:39:50.153936  761388 out.go:177] * Verifying registry addon...
	I0919 18:39:50.153951  761388 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-685250 service yakd-dashboard -n yakd-dashboard
	
	I0919 18:39:50.155824  761388 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0919 18:39:50.157505  761388 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W0919 18:39:50.163513  761388 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
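The `default-storageclass` failure above is an optimistic-concurrency conflict: between minikube's read of the `local-path` StorageClass and its update, the object's resourceVersion changed. The standard client-go remedy is to re-read and retry the mutation under `retry.RetryOnConflict`; a compilable sketch of that fix (illustrative, not minikube's code):

	package storageclass // illustrative package name

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// markNonDefault re-reads the StorageClass on every attempt so the update
	// carries a fresh resourceVersion, retrying on 409 Conflict instead of
	// failing like the log line above.
	func markNonDefault(cs kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
			_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
			return err
		})
	}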
	I0919 18:39:50.238665  761388 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0919 18:39:50.238695  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:50.238959  761388 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0919 18:39:50.238987  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:50.660404  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:50.662046  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:50.877367  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.630488674s)
	W0919 18:39:50.877434  761388 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0919 18:39:50.877461  761388 retry.go:31] will retry after 374.811419ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
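
Note: the apply failure above is a create-then-use race — the batch creates the VolumeSnapshot CRDs and a VolumeSnapshotClass object in the same kubectl invocation, and the custom resource is rejected because the just-created CRD is not yet established (hence "ensure CRDs are installed first"); minikube's response is simply to retry. A hedged sketch of the defensive alternative, polling the CRD's Established condition before applying custom resources; the CRD name comes from the log, while the two-minute timeout is an assumption.

package main

import (
	"context"
	"fmt"
	"time"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := apiextclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(
			context.TODO(), "volumesnapshotclasses.snapshot.storage.k8s.io", metav1.GetOptions{})
		if err == nil {
			for _, cond := range crd.Status.Conditions {
				// Established=True means the apiserver now serves the new kind,
				// so VolumeSnapshotClass objects will no longer be rejected.
				if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
					fmt.Println("CRD established; safe to apply VolumeSnapshotClass objects")
					return
				}
			}
		}
		time.Sleep(time.Second)
	}
	panic("timed out waiting for CRD to become established")
}
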
	I0919 18:39:50.877563  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.424342572s)
	I0919 18:39:51.159983  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:51.160342  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:51.251656  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.294045721s)
	I0919 18:39:51.251706  761388 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-685250"
	I0919 18:39:51.252726  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 18:39:51.253330  761388 out.go:177] * Verifying csi-hostpath-driver addon...
	I0919 18:39:51.255845  761388 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0919 18:39:51.260109  761388 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0919 18:39:51.260134  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:51.299405  761388 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0919 18:39:51.299470  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:51.319259  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
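
Note: the cli_runner invocation above shells out to docker with a Go template to find which host port maps to the container's 22/tcp (33518 here), which then feeds the new ssh client. An equivalent lookup with the Docker Go SDK, as a sketch; only the container name is taken from the log.

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
	"github.com/docker/go-connections/nat"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	info, err := cli.ContainerInspect(context.Background(), "addons-685250")
	if err != nil {
		panic(err)
	}
	// Ports maps a container port ("22/tcp") to its host bindings.
	bindings := info.NetworkSettings.Ports[nat.Port("22/tcp")]
	if len(bindings) == 0 {
		panic("no host binding for 22/tcp")
	}
	fmt.Println("ssh reachable on host port", bindings[0].HostPort) // e.g. 33518
}
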
	I0919 18:39:51.435849  761388 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0919 18:39:51.455177  761388 addons.go:234] Setting addon gcp-auth=true in "addons-685250"
	I0919 18:39:51.455235  761388 host.go:66] Checking if "addons-685250" exists ...
	I0919 18:39:51.455622  761388 cli_runner.go:164] Run: docker container inspect addons-685250 --format={{.State.Status}}
	I0919 18:39:51.473709  761388 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0919 18:39:51.473768  761388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-685250
	I0919 18:39:51.492852  761388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33518 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa Username:docker}
	I0919 18:39:51.660242  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:51.660451  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:51.763672  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:52.148125  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:39:52.160486  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:52.160637  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:52.260177  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:52.659866  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:52.660361  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:52.759357  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:53.159414  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:53.160699  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:53.260412  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:53.660465  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:53.660995  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:53.760079  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:54.036339  761388 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.783560208s)
	I0919 18:39:54.036401  761388 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.56265651s)
	I0919 18:39:54.037930  761388 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0919 18:39:54.039158  761388 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0919 18:39:54.040281  761388 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0919 18:39:54.040295  761388 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0919 18:39:54.060953  761388 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0919 18:39:54.060982  761388 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0919 18:39:54.078061  761388 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0919 18:39:54.078081  761388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0919 18:39:54.096196  761388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
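
Note: "scp memory --> <path>" in the ssh_runner lines above means the runner streams an in-memory buffer straight to a file inside the node over SSH, with no local temp file, before the kubectl apply picks the files up. A sketch of one way to do that with golang.org/x/crypto/ssh, using the address, user, and key path from the log; the tee-based transfer and the placeholder manifest body are assumptions, not minikube's implementation.

package main

import (
	"bytes"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19664-753213/.minikube/machines/addons-685250/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33518", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Stream the manifest from memory into the remote file via sudo tee.
	// The payload here is a placeholder, not the real gcp-auth-ns.yaml.
	session.Stdin = bytes.NewReader([]byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: gcp-auth\n"))
	if err := session.Run("sudo tee /etc/kubernetes/addons/gcp-auth-ns.yaml >/dev/null"); err != nil {
		panic(err)
	}
}
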
	I0919 18:39:54.159825  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:54.161174  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:54.259118  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:54.649396  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:39:54.664552  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:54.666437  761388 addons.go:475] Verifying addon gcp-auth=true in "addons-685250"
	I0919 18:39:54.666458  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:54.669012  761388 out.go:177] * Verifying gcp-auth addon...
	I0919 18:39:54.671405  761388 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0919 18:39:54.762155  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:54.762165  761388 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0919 18:39:54.762193  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:55.159689  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:55.161131  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:55.174401  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:55.259291  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:55.659983  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:55.660209  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:55.674181  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:55.758821  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:56.159552  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:56.161022  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:56.174326  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:56.259237  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:56.660149  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:56.660452  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:56.675011  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:56.759761  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:57.147230  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:39:57.160802  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:57.160843  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:57.174625  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:57.259483  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:57.659641  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:57.660974  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:57.674433  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:57.759804  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:58.159364  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:58.160396  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:58.175074  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:58.258973  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:58.659663  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:58.659995  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:58.674333  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:58.759220  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:59.159931  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:59.160111  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:59.174241  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:59.259030  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:39:59.647936  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:39:59.660361  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:39:59.660641  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:39:59.674569  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:39:59.759432  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:00.160240  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:00.160488  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:00.174961  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:00.259892  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:00.660179  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:00.660554  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:00.675141  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:00.758994  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:01.160048  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:01.160048  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:01.174593  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:01.259801  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:01.659777  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:01.660892  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:01.674204  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:01.759169  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:02.147887  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:02.160172  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:02.160247  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:02.174624  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:02.259598  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:02.659674  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:02.660694  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:02.674100  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:02.759727  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:03.159593  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:03.160617  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:03.174020  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:03.259297  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:03.660462  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:03.660957  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:03.674094  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:03.759774  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:04.159328  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:04.160575  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:04.174927  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:04.259749  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:04.647664  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:04.659478  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:04.661089  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:04.674181  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:04.759138  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:05.160148  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:05.160420  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:05.174732  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:05.259905  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:05.659969  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:05.660156  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:05.674731  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:05.759280  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:06.160047  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:06.160189  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:06.174412  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:06.259142  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:06.660052  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:06.660419  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:06.674781  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:06.759973  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:07.147840  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:07.159737  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:07.160196  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:07.174616  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:07.259365  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:07.659184  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:07.660781  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:07.674067  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:07.758888  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:08.160134  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:08.160271  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:08.174692  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:08.259835  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:08.659150  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:08.660428  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:08.674754  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:08.759483  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:09.159321  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:09.160653  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:09.175114  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:09.260634  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:09.647196  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:09.659462  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:09.660545  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:09.674993  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:09.759810  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:10.159952  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:10.161096  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:10.174611  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:10.259487  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:10.659118  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:10.660327  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:10.674867  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:10.759802  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:11.159342  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:11.160885  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:11.173987  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:11.259734  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:11.647819  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:11.659862  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:11.660211  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:11.674274  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:11.759168  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:12.160283  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:12.160439  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:12.175052  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:12.260097  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:12.659816  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:12.660819  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:12.674404  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:12.759164  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:13.160264  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:13.160357  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:13.174537  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:13.259736  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:13.660466  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:13.660513  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:13.674991  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:13.759495  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:14.146772  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:14.159525  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:14.159867  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:14.174094  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:14.260124  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:14.660152  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:14.660362  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:14.674852  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:14.759444  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:15.159996  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:15.160894  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:15.174310  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:15.259417  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:15.659374  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:15.660883  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:15.674695  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:15.759222  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:16.147487  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:16.159970  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:16.160975  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:16.174207  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:16.258997  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:16.660164  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:16.660247  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:16.674461  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:16.759434  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:17.160167  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:17.160211  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:17.174364  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:17.259173  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:17.658940  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:17.660444  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:17.674638  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:17.759422  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:18.159603  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:18.160463  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:18.174991  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:18.258926  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:18.647877  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:18.660091  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:18.660270  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:18.674507  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:18.759470  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:19.160102  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:19.160359  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:19.174708  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:19.259350  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:19.659690  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:19.660560  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:19.673993  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:19.759643  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:20.159760  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:20.160739  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:20.174018  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:20.259759  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:20.659618  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:20.660617  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:20.673972  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:20.759708  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:21.147628  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:21.159869  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:21.161165  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:21.174520  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:21.259323  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:21.659211  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:21.660585  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:21.673760  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:21.759818  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:22.159736  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:22.160153  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:22.174301  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:22.259002  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:22.659694  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:22.661106  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:22.674760  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:22.759413  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:23.159284  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:23.160467  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:23.174960  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:23.259223  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:23.647843  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:23.659948  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:23.659983  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:23.674196  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:23.758885  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:24.159695  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:24.160775  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:24.174128  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:24.260104  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:24.660632  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:24.661828  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:24.674068  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:24.759900  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:25.159730  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:25.160014  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:25.174822  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:25.259570  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:25.659440  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:25.660392  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:25.674818  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:25.759718  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:26.147606  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:26.159628  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:26.161042  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:26.174701  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:26.259645  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:26.661426  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:26.662087  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:26.674503  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:26.759217  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:27.159812  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:27.160262  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:27.174635  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:27.259405  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:27.659575  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:27.660727  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:27.674227  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:27.759021  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:28.147837  761388 node_ready.go:53] node "addons-685250" has status "Ready":"False"
	I0919 18:40:28.160082  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:28.160114  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:28.174316  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:28.259173  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:28.646812  761388 node_ready.go:49] node "addons-685250" has status "Ready":"True"
	I0919 18:40:28.646840  761388 node_ready.go:38] duration metric: took 43.002724586s for node "addons-685250" to be "Ready" ...
	I0919 18:40:28.646862  761388 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
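
Note: the node_ready.go lines report the node's NodeReady condition, which flips from False to True here after 43 seconds; only then do the label-selector waits above start finding schedulable pods. A one-shot sketch of reading that condition with client-go; the node name and kubeconfig path are from the log.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-685250", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, cond := range node.Status.Conditions {
		// The log's "Ready":"False" / "Ready":"True" strings are this
		// condition's Status field.
		if cond.Type == corev1.NodeReady {
			fmt.Printf("node %s Ready=%s\n", node.Name, cond.Status)
		}
	}
}
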
	I0919 18:40:28.657370  761388 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xxkrh" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:28.665479  761388 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0919 18:40:28.665601  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:28.666301  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:28.673925  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:28.761809  761388 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0919 18:40:28.761844  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:29.160890  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:29.161414  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:29.174200  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:29.262793  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:29.666949  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:29.668214  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:29.673941  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:29.760517  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:30.160901  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:30.165455  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:30.238277  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:30.261435  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:30.665010  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:30.665243  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:30.740441  761388 pod_ready.go:93] pod "coredns-7c65d6cfc9-xxkrh" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:30.740475  761388 pod_ready.go:82] duration metric: took 2.083070651s for pod "coredns-7c65d6cfc9-xxkrh" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.740502  761388 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-685250" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.740774  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:30.749009  761388 pod_ready.go:93] pod "etcd-addons-685250" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:30.749034  761388 pod_ready.go:82] duration metric: took 8.524276ms for pod "etcd-addons-685250" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.749051  761388 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-685250" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.755475  761388 pod_ready.go:93] pod "kube-apiserver-addons-685250" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:30.755499  761388 pod_ready.go:82] duration metric: took 6.439358ms for pod "kube-apiserver-addons-685250" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.755513  761388 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-685250" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.837071  761388 pod_ready.go:93] pod "kube-controller-manager-addons-685250" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:30.837158  761388 pod_ready.go:82] duration metric: took 81.634686ms for pod "kube-controller-manager-addons-685250" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.837180  761388 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tt5h8" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.842181  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:30.843110  761388 pod_ready.go:93] pod "kube-proxy-tt5h8" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:30.843130  761388 pod_ready.go:82] duration metric: took 5.940025ms for pod "kube-proxy-tt5h8" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:30.843141  761388 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-685250" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:31.064216  761388 pod_ready.go:93] pod "kube-scheduler-addons-685250" in "kube-system" namespace has status "Ready":"True"
	I0919 18:40:31.064250  761388 pod_ready.go:82] duration metric: took 221.10192ms for pod "kube-scheduler-addons-685250" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:31.064264  761388 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace to be "Ready" ...
	I0919 18:40:31.160309  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:31.161868  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:31.175154  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:31.261445  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:31.661945  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:31.662739  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:31.674262  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:31.764171  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:32.160964  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:32.161120  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:32.175453  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:32.261255  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:32.660913  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:32.661774  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:32.675133  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:32.760592  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:33.070854  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:33.161051  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:33.161301  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:33.175286  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:33.260865  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:33.660702  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:33.661852  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:33.675273  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:33.760668  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:34.160546  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:34.161086  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:34.174285  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:34.260753  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:34.661118  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:34.661516  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:34.675418  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:34.760922  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:35.071857  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:35.160454  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:35.160768  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:35.175281  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:35.260345  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:35.660487  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:35.661415  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:35.674901  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:35.760686  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:36.160095  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:36.161029  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:36.174515  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:36.260186  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:36.660284  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:36.661541  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:36.674751  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:36.760998  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:37.160677  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:37.160812  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:37.174659  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:37.260012  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:37.569850  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:37.660726  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:37.661114  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:37.674871  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:37.762472  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:38.160011  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:38.161167  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:38.236912  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:38.261156  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:38.660760  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:38.661073  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:38.675428  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:38.760681  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:39.160674  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:39.161278  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:39.174402  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:39.259952  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:39.570471  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:39.660746  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:39.661314  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:39.675826  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:39.760609  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:40.160453  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:40.161002  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:40.175034  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:40.261000  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:40.660533  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:40.661321  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:40.674507  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:40.760519  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:41.160473  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:41.161342  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:41.174400  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:41.259949  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:41.570843  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:41.660891  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:41.661331  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:41.675442  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:41.761658  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:42.159681  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:42.161135  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:42.175056  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:42.260520  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:42.660591  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:42.660622  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:42.675267  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:42.761379  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:43.160638  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:43.161031  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:43.241441  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:43.261128  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:43.641195  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:43.660811  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:43.660936  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:43.674877  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:43.761319  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:44.160296  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:44.161343  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:44.174926  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:44.260471  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:44.660490  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:44.661342  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:44.674851  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:44.760497  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:45.160507  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:45.160595  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:45.174852  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:45.260568  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:45.660293  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:45.660999  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:45.674670  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:45.761087  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:46.070190  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:46.160550  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:46.160867  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:46.174270  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:46.260149  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:46.660826  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:46.661696  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:46.676864  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:46.760955  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:47.160938  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:47.161615  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:47.175003  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:47.260783  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:47.660110  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:47.663272  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:47.701700  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:47.760283  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:48.159939  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:48.160947  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:48.174393  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:48.261025  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:48.570860  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:48.660740  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:48.661222  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:48.674639  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:48.761763  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:49.160005  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:49.160755  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:49.175182  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:49.260174  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:49.661013  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:49.661304  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:49.675895  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:49.777512  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:50.160946  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:50.160950  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:50.174204  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:50.259800  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:50.660357  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:50.661468  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:50.674771  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:50.760091  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:51.069537  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:51.160657  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:51.161375  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:51.174522  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:51.260449  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:51.660943  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:51.661436  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:51.679949  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:51.760555  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:52.160884  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:52.161969  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:52.175511  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:52.260422  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:52.660009  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:52.661427  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:52.674747  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:52.760455  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:53.069882  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:53.160723  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:53.160847  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:53.175048  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:53.260265  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:53.660742  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:53.660975  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:53.675736  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:53.760427  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:54.160454  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:54.160554  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:54.175527  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:54.261623  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:54.661044  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:54.661280  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:54.674256  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:54.762345  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:55.161624  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:55.161856  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:55.177557  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:55.260964  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:55.571599  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:55.660145  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:55.661293  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:55.674636  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:55.760666  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:56.160746  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:56.161295  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:56.174304  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:56.259893  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:56.660305  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:56.661330  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:56.674639  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:56.759937  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:57.161201  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:57.161367  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:57.174319  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:57.259921  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:57.660452  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:57.661521  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:57.675492  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:57.760449  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:58.071078  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:40:58.166319  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:58.167684  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:58.174484  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:58.261744  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:58.739476  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:58.740647  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:58.741278  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:58.843925  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:59.250851  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:59.348633  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:59.349162  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:59.352318  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:40:59.660355  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:40:59.662169  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:40:59.737125  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:40:59.761343  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:00.071258  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:00.161047  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:00.161410  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:00.175212  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:00.261071  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:00.661009  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:00.662071  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:00.674963  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:00.761260  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:01.160995  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:01.161522  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:01.174377  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:01.261177  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:01.660419  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:01.661825  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:01.675387  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:01.760448  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:02.071634  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:02.160982  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:02.161497  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:02.175139  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:02.262015  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:02.660625  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:02.661137  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:02.676415  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:02.760266  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:03.160315  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:03.161430  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:03.174874  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:03.260917  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:03.660127  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:03.661283  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:03.760962  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:03.761328  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:04.160941  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:04.161529  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:04.175159  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:04.260532  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:04.570304  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:04.660567  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:04.661503  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:04.675149  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:04.761527  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:05.160742  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:05.161438  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:05.175035  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:05.260884  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:05.660133  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:05.661095  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:05.674647  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:05.760505  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:06.160998  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:06.161237  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:06.175185  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:06.261772  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:06.570424  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:06.660209  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:06.661433  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:06.675129  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:06.761340  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:07.160439  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:07.161643  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:07.175553  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:07.260491  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:07.661227  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 18:41:07.661700  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:07.674758  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:07.769893  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:08.160882  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:08.161229  761388 kapi.go:107] duration metric: took 1m18.003722545s to wait for kubernetes.io/minikube-addons=registry ...
	I0919 18:41:08.174364  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:08.260993  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:08.570813  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:08.661066  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:08.675397  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:08.761869  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:09.163441  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:09.260343  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:09.261680  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:09.661162  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:09.738749  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:09.761895  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:10.161848  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:10.174642  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:10.261127  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:10.638793  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:10.660408  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:10.737983  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:10.761997  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:11.160636  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:11.238753  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:11.260239  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:11.661077  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:11.675809  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:11.760946  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:12.160226  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:12.174555  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:12.260120  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:12.660888  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:12.675281  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:12.759818  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:13.070755  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:13.159900  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:13.175280  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:13.260711  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:13.674228  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:13.675067  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:13.761264  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:14.160557  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:14.174803  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:14.260591  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:14.660641  761388 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 18:41:14.675045  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:14.761376  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:15.070790  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:15.161017  761388 kapi.go:107] duration metric: took 1m25.005187502s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0919 18:41:15.174846  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:15.261085  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:15.675476  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:15.837474  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:16.268231  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:16.268764  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:16.676196  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 18:41:16.760827  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:17.176212  761388 kapi.go:107] duration metric: took 1m22.504803809s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0919 18:41:17.177857  761388 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-685250 cluster.
	I0919 18:41:17.179198  761388 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0919 18:41:17.180644  761388 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0919 18:41:17.262198  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:17.570361  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:17.760518  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:18.261747  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:18.761118  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:19.260370  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:19.570826  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:19.761115  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:20.260708  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:20.761013  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:21.260276  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:21.571353  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:21.760456  761388 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 18:41:22.260815  761388 kapi.go:107] duration metric: took 1m31.004968765s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0919 18:41:22.262816  761388 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, ingress-dns, helm-tiller, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0919 18:41:22.264198  761388 addons.go:510] duration metric: took 1m38.250564753s for enable addons: enabled=[nvidia-device-plugin storage-provisioner ingress-dns helm-tiller cloud-spanner metrics-server yakd storage-provisioner-rancher inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
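
The kapi.go:96 lines above record minikube polling each addon's pods by label selector, roughly every 500ms, until they leave Pending. A minimal client-go sketch of that polling shape follows; the waitForPodRunning helper, its interval, and the kubeconfig handling are illustrative assumptions, not minikube's actual kapi implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodRunning polls pods matching selector in ns until one reports
	// phase Running or the timeout elapses, the same shape as the kapi.go:96
	// polling above. Interval and structure are illustrative assumptions.
	func waitForPodRunning(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return nil
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
		}
		return fmt.Errorf("timed out waiting for %q in %q", selector, ns)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		err = waitForPodRunning(cs, "kube-system",
			"kubernetes.io/minikube-addons=registry", 6*time.Minute)
		fmt.Println(err)
	}
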
	I0919 18:41:24.069345  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:26.070338  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:28.571150  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:31.069639  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:33.069801  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:35.069951  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:37.070152  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:39.570142  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:41.570373  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:44.069797  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:46.070575  761388 pod_ready.go:103] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"False"
	I0919 18:41:46.570352  761388 pod_ready.go:93] pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace has status "Ready":"True"
	I0919 18:41:46.570378  761388 pod_ready.go:82] duration metric: took 1m15.506104425s for pod "metrics-server-84c5f94fbc-gpv2k" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:46.570389  761388 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-lnffq" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:46.574639  761388 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-lnffq" in "kube-system" namespace has status "Ready":"True"
	I0919 18:41:46.574659  761388 pod_ready.go:82] duration metric: took 4.26409ms for pod "nvidia-device-plugin-daemonset-lnffq" in "kube-system" namespace to be "Ready" ...
	I0919 18:41:46.574677  761388 pod_ready.go:39] duration metric: took 1m17.927800889s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
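
The pod_ready.go lines, by contrast, track the pod's Ready condition rather than its phase, which is why metrics-server-84c5f94fbc-gpv2k can report "Ready":"False" for over a minute while already Running: the condition stays False until readiness probes pass. A sketch of that condition check, assuming client-go types (isPodReady is a hypothetical helper, not minikube's code):

	package readiness

	import corev1 "k8s.io/api/core/v1"

	// isPodReady reports whether the PodReady condition is True; this is the
	// condition behind the pod_ready.go "Ready":"False"/"Ready":"True" lines
	// above, and it can lag phase Running by the probe interval.
	func isPodReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
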
	I0919 18:41:46.574695  761388 api_server.go:52] waiting for apiserver process to appear ...
	I0919 18:41:46.574727  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 18:41:46.574775  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 18:41:46.610505  761388 cri.go:89] found id: "d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf"
	I0919 18:41:46.610525  761388 cri.go:89] found id: ""
	I0919 18:41:46.610532  761388 logs.go:276] 1 containers: [d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf]
	I0919 18:41:46.610585  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:46.614097  761388 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 18:41:46.614166  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 18:41:46.647964  761388 cri.go:89] found id: "daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf"
	I0919 18:41:46.647984  761388 cri.go:89] found id: ""
	I0919 18:41:46.647992  761388 logs.go:276] 1 containers: [daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf]
	I0919 18:41:46.648034  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:46.651737  761388 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 18:41:46.651827  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 18:41:46.685728  761388 cri.go:89] found id: "61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a"
	I0919 18:41:46.685751  761388 cri.go:89] found id: ""
	I0919 18:41:46.685761  761388 logs.go:276] 1 containers: [61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a]
	I0919 18:41:46.685842  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:46.689509  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 18:41:46.689602  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 18:41:46.723120  761388 cri.go:89] found id: "a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae"
	I0919 18:41:46.723148  761388 cri.go:89] found id: ""
	I0919 18:41:46.723159  761388 logs.go:276] 1 containers: [a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae]
	I0919 18:41:46.723206  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:46.726505  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 18:41:46.726561  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 18:41:46.764041  761388 cri.go:89] found id: "1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d"
	I0919 18:41:46.764067  761388 cri.go:89] found id: ""
	I0919 18:41:46.764076  761388 logs.go:276] 1 containers: [1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d]
	I0919 18:41:46.764139  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:46.767386  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 18:41:46.767456  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 18:41:46.801334  761388 cri.go:89] found id: "4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148"
	I0919 18:41:46.801362  761388 cri.go:89] found id: ""
	I0919 18:41:46.801373  761388 logs.go:276] 1 containers: [4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148]
	I0919 18:41:46.801437  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:46.804747  761388 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 18:41:46.804810  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 18:41:46.838269  761388 cri.go:89] found id: "28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea"
	I0919 18:41:46.838289  761388 cri.go:89] found id: ""
	I0919 18:41:46.838297  761388 logs.go:276] 1 containers: [28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea]
	I0919 18:41:46.838353  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:46.841583  761388 logs.go:123] Gathering logs for describe nodes ...
	I0919 18:41:46.841608  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 18:41:46.939796  761388 logs.go:123] Gathering logs for kube-proxy [1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d] ...
	I0919 18:41:46.939825  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d"
	I0919 18:41:46.973962  761388 logs.go:123] Gathering logs for kube-controller-manager [4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148] ...
	I0919 18:41:46.973996  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148"
	I0919 18:41:47.040527  761388 logs.go:123] Gathering logs for kindnet [28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea] ...
	I0919 18:41:47.040563  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea"
	I0919 18:41:47.079512  761388 logs.go:123] Gathering logs for CRI-O ...
	I0919 18:41:47.079548  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 18:41:47.156835  761388 logs.go:123] Gathering logs for kubelet ...
	I0919 18:41:47.156873  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 18:41:47.244389  761388 logs.go:123] Gathering logs for kube-apiserver [d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf] ...
	I0919 18:41:47.244425  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf"
	I0919 18:41:47.291698  761388 logs.go:123] Gathering logs for etcd [daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf] ...
	I0919 18:41:47.291734  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf"
	I0919 18:41:47.339857  761388 logs.go:123] Gathering logs for coredns [61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a] ...
	I0919 18:41:47.339892  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a"
	I0919 18:41:47.378377  761388 logs.go:123] Gathering logs for kube-scheduler [a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae] ...
	I0919 18:41:47.378414  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae"
	I0919 18:41:47.419595  761388 logs.go:123] Gathering logs for container status ...
	I0919 18:41:47.419631  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 18:41:47.461066  761388 logs.go:123] Gathering logs for dmesg ...
	I0919 18:41:47.461101  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
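Each "Gathering logs" pair above follows the same two-step crictl pattern: resolve a container ID by name, then tail that container's logs. A sketch of the equivalent manual commands on the node, taken directly from the Run: lines above:

    # resolve the kube-apiserver container ID, then tail its last 400 log lines
    ID=$(sudo crictl ps -a --quiet --name=kube-apiserver)
    sudo crictl logs --tail 400 "$ID"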
	I0919 18:41:49.991902  761388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 18:41:50.006246  761388 api_server.go:72] duration metric: took 2m5.992641544s to wait for apiserver process to appear ...
	I0919 18:41:50.006277  761388 api_server.go:88] waiting for apiserver healthz status ...
	I0919 18:41:50.006316  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 18:41:50.006369  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 18:41:50.040275  761388 cri.go:89] found id: "d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf"
	I0919 18:41:50.040319  761388 cri.go:89] found id: ""
	I0919 18:41:50.040329  761388 logs.go:276] 1 containers: [d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf]
	I0919 18:41:50.040373  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:50.043705  761388 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 18:41:50.043766  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 18:41:50.078798  761388 cri.go:89] found id: "daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf"
	I0919 18:41:50.078819  761388 cri.go:89] found id: ""
	I0919 18:41:50.078826  761388 logs.go:276] 1 containers: [daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf]
	I0919 18:41:50.078884  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:50.082274  761388 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 18:41:50.082341  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 18:41:50.116003  761388 cri.go:89] found id: "61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a"
	I0919 18:41:50.116024  761388 cri.go:89] found id: ""
	I0919 18:41:50.116032  761388 logs.go:276] 1 containers: [61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a]
	I0919 18:41:50.116082  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:50.119438  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 18:41:50.119496  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 18:41:50.153370  761388 cri.go:89] found id: "a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae"
	I0919 18:41:50.153390  761388 cri.go:89] found id: ""
	I0919 18:41:50.153398  761388 logs.go:276] 1 containers: [a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae]
	I0919 18:41:50.153451  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:50.156934  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 18:41:50.156999  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 18:41:50.191346  761388 cri.go:89] found id: "1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d"
	I0919 18:41:50.191372  761388 cri.go:89] found id: ""
	I0919 18:41:50.191381  761388 logs.go:276] 1 containers: [1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d]
	I0919 18:41:50.191442  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:50.195442  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 18:41:50.195523  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 18:41:50.230094  761388 cri.go:89] found id: "4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148"
	I0919 18:41:50.230116  761388 cri.go:89] found id: ""
	I0919 18:41:50.230126  761388 logs.go:276] 1 containers: [4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148]
	I0919 18:41:50.230173  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:50.233591  761388 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 18:41:50.233648  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 18:41:50.267946  761388 cri.go:89] found id: "28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea"
	I0919 18:41:50.267968  761388 cri.go:89] found id: ""
	I0919 18:41:50.267976  761388 logs.go:276] 1 containers: [28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea]
	I0919 18:41:50.268020  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:50.271492  761388 logs.go:123] Gathering logs for etcd [daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf] ...
	I0919 18:41:50.271521  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf"
	I0919 18:41:50.315171  761388 logs.go:123] Gathering logs for coredns [61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a] ...
	I0919 18:41:50.315204  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a"
	I0919 18:41:50.350242  761388 logs.go:123] Gathering logs for kube-controller-manager [4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148] ...
	I0919 18:41:50.350276  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148"
	I0919 18:41:50.406986  761388 logs.go:123] Gathering logs for kindnet [28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea] ...
	I0919 18:41:50.407024  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea"
	I0919 18:41:50.443914  761388 logs.go:123] Gathering logs for CRI-O ...
	I0919 18:41:50.443950  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 18:41:50.522117  761388 logs.go:123] Gathering logs for kubelet ...
	I0919 18:41:50.522161  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 18:41:50.603999  761388 logs.go:123] Gathering logs for dmesg ...
	I0919 18:41:50.604036  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 18:41:50.633867  761388 logs.go:123] Gathering logs for describe nodes ...
	I0919 18:41:50.633909  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 18:41:50.735662  761388 logs.go:123] Gathering logs for kube-apiserver [d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf] ...
	I0919 18:41:50.735694  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf"
	I0919 18:41:50.778766  761388 logs.go:123] Gathering logs for kube-scheduler [a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae] ...
	I0919 18:41:50.778800  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae"
	I0919 18:41:50.822323  761388 logs.go:123] Gathering logs for kube-proxy [1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d] ...
	I0919 18:41:50.822362  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d"
	I0919 18:41:50.858212  761388 logs.go:123] Gathering logs for container status ...
	I0919 18:41:50.858244  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 18:41:53.402426  761388 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 18:41:53.406334  761388 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 18:41:53.407293  761388 api_server.go:141] control plane version: v1.31.1
	I0919 18:41:53.407337  761388 api_server.go:131] duration metric: took 3.401052443s to wait for apiserver health ...
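The healthz probe above can also be issued by hand; a sketch assuming the apiserver still permits unauthenticated access to /healthz (the default system:public-info-viewer binding), with the endpoint taken from the log:

    curl -sk https://192.168.49.2:8443/healthz    # expect the body "ok" with HTTP 200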
	I0919 18:41:53.407348  761388 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 18:41:53.407372  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 18:41:53.407424  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 18:41:53.442342  761388 cri.go:89] found id: "d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf"
	I0919 18:41:53.442368  761388 cri.go:89] found id: ""
	I0919 18:41:53.442378  761388 logs.go:276] 1 containers: [d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf]
	I0919 18:41:53.442443  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:53.445843  761388 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 18:41:53.445911  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 18:41:53.479392  761388 cri.go:89] found id: "daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf"
	I0919 18:41:53.479417  761388 cri.go:89] found id: ""
	I0919 18:41:53.479427  761388 logs.go:276] 1 containers: [daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf]
	I0919 18:41:53.479483  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:53.482761  761388 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 18:41:53.482821  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 18:41:53.517132  761388 cri.go:89] found id: "61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a"
	I0919 18:41:53.517157  761388 cri.go:89] found id: ""
	I0919 18:41:53.517169  761388 logs.go:276] 1 containers: [61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a]
	I0919 18:41:53.517224  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:53.520542  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 18:41:53.520602  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 18:41:53.554085  761388 cri.go:89] found id: "a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae"
	I0919 18:41:53.554107  761388 cri.go:89] found id: ""
	I0919 18:41:53.554116  761388 logs.go:276] 1 containers: [a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae]
	I0919 18:41:53.554174  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:53.557699  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 18:41:53.557779  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 18:41:53.591682  761388 cri.go:89] found id: "1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d"
	I0919 18:41:53.591703  761388 cri.go:89] found id: ""
	I0919 18:41:53.591711  761388 logs.go:276] 1 containers: [1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d]
	I0919 18:41:53.591755  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:53.595094  761388 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 18:41:53.595172  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 18:41:53.630170  761388 cri.go:89] found id: "4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148"
	I0919 18:41:53.630192  761388 cri.go:89] found id: ""
	I0919 18:41:53.630199  761388 logs.go:276] 1 containers: [4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148]
	I0919 18:41:53.630257  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:53.633583  761388 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 18:41:53.633636  761388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 18:41:53.667431  761388 cri.go:89] found id: "28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea"
	I0919 18:41:53.667451  761388 cri.go:89] found id: ""
	I0919 18:41:53.667459  761388 logs.go:276] 1 containers: [28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea]
	I0919 18:41:53.667505  761388 ssh_runner.go:195] Run: which crictl
	I0919 18:41:53.670883  761388 logs.go:123] Gathering logs for coredns [61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a] ...
	I0919 18:41:53.670906  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a"
	I0919 18:41:53.707961  761388 logs.go:123] Gathering logs for kube-scheduler [a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae] ...
	I0919 18:41:53.707993  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae"
	I0919 18:41:53.749962  761388 logs.go:123] Gathering logs for kube-controller-manager [4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148] ...
	I0919 18:41:53.749997  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148"
	I0919 18:41:53.808507  761388 logs.go:123] Gathering logs for kindnet [28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea] ...
	I0919 18:41:53.808548  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea"
	I0919 18:41:53.843831  761388 logs.go:123] Gathering logs for container status ...
	I0919 18:41:53.843860  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 18:41:53.886934  761388 logs.go:123] Gathering logs for kubelet ...
	I0919 18:41:53.886962  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 18:41:53.965269  761388 logs.go:123] Gathering logs for dmesg ...
	I0919 18:41:53.965305  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 18:41:54.000130  761388 logs.go:123] Gathering logs for describe nodes ...
	I0919 18:41:54.000165  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 18:41:54.102256  761388 logs.go:123] Gathering logs for CRI-O ...
	I0919 18:41:54.102283  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 18:41:54.180041  761388 logs.go:123] Gathering logs for kube-apiserver [d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf] ...
	I0919 18:41:54.180082  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf"
	I0919 18:41:54.225323  761388 logs.go:123] Gathering logs for etcd [daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf] ...
	I0919 18:41:54.225355  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf"
	I0919 18:41:54.270873  761388 logs.go:123] Gathering logs for kube-proxy [1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d] ...
	I0919 18:41:54.270914  761388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d"
	I0919 18:41:56.816722  761388 system_pods.go:59] 19 kube-system pods found
	I0919 18:41:56.816754  761388 system_pods.go:61] "coredns-7c65d6cfc9-xxkrh" [a7aaff41-f43e-4f04-b483-640f84c09e46] Running
	I0919 18:41:56.816759  761388 system_pods.go:61] "csi-hostpath-attacher-0" [baa243bf-40a7-484e-8c01-0899f41d8354] Running
	I0919 18:41:56.816763  761388 system_pods.go:61] "csi-hostpath-resizer-0" [3c4594f5-9d7b-4793-a0c8-7c6105b7d474] Running
	I0919 18:41:56.816767  761388 system_pods.go:61] "csi-hostpathplugin-wvvls" [354c11da-ee7f-4cda-9e0d-9814a4c5ece1] Running
	I0919 18:41:56.816770  761388 system_pods.go:61] "etcd-addons-685250" [cdb92c06-962c-4149-b7f6-bb5fe8331afd] Running
	I0919 18:41:56.816773  761388 system_pods.go:61] "kindnet-nr24c" [8747e20c-57fd-4ffe-9f87-ddda89de3e7b] Running
	I0919 18:41:56.816777  761388 system_pods.go:61] "kube-apiserver-addons-685250" [593c1822-def4-4967-babb-da46832c2f3b] Running
	I0919 18:41:56.816780  761388 system_pods.go:61] "kube-controller-manager-addons-685250" [241a64c3-08de-424a-8a6f-aaad07ae351f] Running
	I0919 18:41:56.816783  761388 system_pods.go:61] "kube-ingress-dns-minikube" [4d2c1d92-69aa-4dcd-be37-639b9fd4ab3d] Running
	I0919 18:41:56.816787  761388 system_pods.go:61] "kube-proxy-tt5h8" [693e7420-8268-43db-82ab-191606a57636] Running
	I0919 18:41:56.816791  761388 system_pods.go:61] "kube-scheduler-addons-685250" [57e53de0-08d3-4b04-822c-361178eb9bdf] Running
	I0919 18:41:56.816796  761388 system_pods.go:61] "metrics-server-84c5f94fbc-gpv2k" [0041dcd9-b46b-406b-a78c-728fda2b92cc] Running
	I0919 18:41:56.816800  761388 system_pods.go:61] "nvidia-device-plugin-daemonset-lnffq" [b2573f29-e8a6-4fc7-9a19-a01fb32e67f2] Running
	I0919 18:41:56.816805  761388 system_pods.go:61] "registry-66c9cd494c-tsz4w" [bdd1e643-0c83-4fed-a147-0dd79f789e29] Running
	I0919 18:41:56.816814  761388 system_pods.go:61] "registry-proxy-rgdgh" [fc0b3544-d729-4e33-a260-ef1ab277d08f] Running
	I0919 18:41:56.816821  761388 system_pods.go:61] "snapshot-controller-56fcc65765-hpwtx" [119e2c3a-894e-4b8d-b275-06125bb32c87] Running
	I0919 18:41:56.816825  761388 system_pods.go:61] "snapshot-controller-56fcc65765-qsngh" [8eba870c-9765-4259-b19c-945987c52d6e] Running
	I0919 18:41:56.816831  761388 system_pods.go:61] "storage-provisioner" [ddbf1396-7100-4a51-a1b7-b6896cabc0f4] Running
	I0919 18:41:56.816836  761388 system_pods.go:61] "tiller-deploy-b48cc5f79-64k5s" [bedc3304-f3bb-4c40-bb2c-bec621a3645c] Running
	I0919 18:41:56.816844  761388 system_pods.go:74] duration metric: took 3.409487976s to wait for pod list to return data ...
	I0919 18:41:56.816856  761388 default_sa.go:34] waiting for default service account to be created ...
	I0919 18:41:56.819044  761388 default_sa.go:45] found service account: "default"
	I0919 18:41:56.819064  761388 default_sa.go:55] duration metric: took 2.201823ms for default service account to be created ...
	I0919 18:41:56.819072  761388 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 18:41:56.827195  761388 system_pods.go:86] 19 kube-system pods found
	I0919 18:41:56.827219  761388 system_pods.go:89] "coredns-7c65d6cfc9-xxkrh" [a7aaff41-f43e-4f04-b483-640f84c09e46] Running
	I0919 18:41:56.827224  761388 system_pods.go:89] "csi-hostpath-attacher-0" [baa243bf-40a7-484e-8c01-0899f41d8354] Running
	I0919 18:41:56.827229  761388 system_pods.go:89] "csi-hostpath-resizer-0" [3c4594f5-9d7b-4793-a0c8-7c6105b7d474] Running
	I0919 18:41:56.827232  761388 system_pods.go:89] "csi-hostpathplugin-wvvls" [354c11da-ee7f-4cda-9e0d-9814a4c5ece1] Running
	I0919 18:41:56.827236  761388 system_pods.go:89] "etcd-addons-685250" [cdb92c06-962c-4149-b7f6-bb5fe8331afd] Running
	I0919 18:41:56.827239  761388 system_pods.go:89] "kindnet-nr24c" [8747e20c-57fd-4ffe-9f87-ddda89de3e7b] Running
	I0919 18:41:56.827243  761388 system_pods.go:89] "kube-apiserver-addons-685250" [593c1822-def4-4967-babb-da46832c2f3b] Running
	I0919 18:41:56.827246  761388 system_pods.go:89] "kube-controller-manager-addons-685250" [241a64c3-08de-424a-8a6f-aaad07ae351f] Running
	I0919 18:41:56.827250  761388 system_pods.go:89] "kube-ingress-dns-minikube" [4d2c1d92-69aa-4dcd-be37-639b9fd4ab3d] Running
	I0919 18:41:56.827254  761388 system_pods.go:89] "kube-proxy-tt5h8" [693e7420-8268-43db-82ab-191606a57636] Running
	I0919 18:41:56.827258  761388 system_pods.go:89] "kube-scheduler-addons-685250" [57e53de0-08d3-4b04-822c-361178eb9bdf] Running
	I0919 18:41:56.827261  761388 system_pods.go:89] "metrics-server-84c5f94fbc-gpv2k" [0041dcd9-b46b-406b-a78c-728fda2b92cc] Running
	I0919 18:41:56.827264  761388 system_pods.go:89] "nvidia-device-plugin-daemonset-lnffq" [b2573f29-e8a6-4fc7-9a19-a01fb32e67f2] Running
	I0919 18:41:56.827267  761388 system_pods.go:89] "registry-66c9cd494c-tsz4w" [bdd1e643-0c83-4fed-a147-0dd79f789e29] Running
	I0919 18:41:56.827270  761388 system_pods.go:89] "registry-proxy-rgdgh" [fc0b3544-d729-4e33-a260-ef1ab277d08f] Running
	I0919 18:41:56.827273  761388 system_pods.go:89] "snapshot-controller-56fcc65765-hpwtx" [119e2c3a-894e-4b8d-b275-06125bb32c87] Running
	I0919 18:41:56.827276  761388 system_pods.go:89] "snapshot-controller-56fcc65765-qsngh" [8eba870c-9765-4259-b19c-945987c52d6e] Running
	I0919 18:41:56.827279  761388 system_pods.go:89] "storage-provisioner" [ddbf1396-7100-4a51-a1b7-b6896cabc0f4] Running
	I0919 18:41:56.827282  761388 system_pods.go:89] "tiller-deploy-b48cc5f79-64k5s" [bedc3304-f3bb-4c40-bb2c-bec621a3645c] Running
	I0919 18:41:56.827287  761388 system_pods.go:126] duration metric: took 8.210478ms to wait for k8s-apps to be running ...
	I0919 18:41:56.827294  761388 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 18:41:56.827364  761388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 18:41:56.838722  761388 system_svc.go:56] duration metric: took 11.419899ms WaitForService to wait for kubelet
	I0919 18:41:56.838749  761388 kubeadm.go:582] duration metric: took 2m12.825152378s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 18:41:56.838775  761388 node_conditions.go:102] verifying NodePressure condition ...
	I0919 18:41:56.841799  761388 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 18:41:56.841823  761388 node_conditions.go:123] node cpu capacity is 8
	I0919 18:41:56.841837  761388 node_conditions.go:105] duration metric: took 3.056374ms to run NodePressure ...
	I0919 18:41:56.841850  761388 start.go:241] waiting for startup goroutines ...
	I0919 18:41:56.841857  761388 start.go:246] waiting for cluster config update ...
	I0919 18:41:56.841872  761388 start.go:255] writing updated cluster config ...
	I0919 18:41:56.842127  761388 ssh_runner.go:195] Run: rm -f paused
	I0919 18:41:56.891468  761388 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0919 18:41:56.894630  761388 out.go:177] * Done! kubectl is now configured to use "addons-685250" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 19 18:55:37 addons-685250 crio[1028]: time="2024-09-19 18:55:37.987342046Z" level=info msg="Got pod network &{Name:metrics-server-84c5f94fbc-gpv2k Namespace:kube-system ID:4dc38a01fe9458400c20f713325d8ab5aebde729d4334a5d8f0ab2691a41b445 UID:0041dcd9-b46b-406b-a78c-728fda2b92cc NetNS:/var/run/netns/6a6f0065-d64f-47ce-a95c-b9dc2d7b1749 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 19 18:55:37 addons-685250 crio[1028]: time="2024-09-19 18:55:37.987485657Z" level=info msg="Deleting pod kube-system_metrics-server-84c5f94fbc-gpv2k from CNI network \"kindnet\" (type=ptp)"
	Sep 19 18:55:38 addons-685250 crio[1028]: time="2024-09-19 18:55:38.021008483Z" level=info msg="Stopped pod sandbox: 4dc38a01fe9458400c20f713325d8ab5aebde729d4334a5d8f0ab2691a41b445" id=23a80ae1-f944-4300-b10f-33686d8bcfe4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 19 18:55:38 addons-685250 crio[1028]: time="2024-09-19 18:55:38.943047126Z" level=info msg="Removing container: 3def0c19497bb9d4281de3fde17e1803880d219071a41edf14d086fcb4db5a47" id=85cdfb93-5679-487a-9281-daee62edd5db name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 19 18:55:38 addons-685250 crio[1028]: time="2024-09-19 18:55:38.958572164Z" level=info msg="Removed container 3def0c19497bb9d4281de3fde17e1803880d219071a41edf14d086fcb4db5a47: kube-system/metrics-server-84c5f94fbc-gpv2k/metrics-server" id=85cdfb93-5679-487a-9281-daee62edd5db name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 19 18:55:39 addons-685250 crio[1028]: time="2024-09-19 18:55:39.354256359Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6b69eba9-b120-4560-8902-9de72c57daef name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:55:39 addons-685250 crio[1028]: time="2024-09-19 18:55:39.354585568Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=6b69eba9-b120-4560-8902-9de72c57daef name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:55:39 addons-685250 crio[1028]: time="2024-09-19 18:55:39.587836628Z" level=info msg="Stopping pod sandbox: 4dc38a01fe9458400c20f713325d8ab5aebde729d4334a5d8f0ab2691a41b445" id=25f6639a-48b7-4361-8dec-120c0595f02a name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 19 18:55:39 addons-685250 crio[1028]: time="2024-09-19 18:55:39.587884132Z" level=info msg="Stopped pod sandbox (already stopped): 4dc38a01fe9458400c20f713325d8ab5aebde729d4334a5d8f0ab2691a41b445" id=25f6639a-48b7-4361-8dec-120c0595f02a name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 19 18:55:39 addons-685250 crio[1028]: time="2024-09-19 18:55:39.588173329Z" level=info msg="Removing pod sandbox: 4dc38a01fe9458400c20f713325d8ab5aebde729d4334a5d8f0ab2691a41b445" id=b3249a12-0bf8-460a-8669-d7d8c778d27c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 19 18:55:39 addons-685250 crio[1028]: time="2024-09-19 18:55:39.595533900Z" level=info msg="Removed pod sandbox: 4dc38a01fe9458400c20f713325d8ab5aebde729d4334a5d8f0ab2691a41b445" id=b3249a12-0bf8-460a-8669-d7d8c778d27c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 19 18:55:46 addons-685250 crio[1028]: time="2024-09-19 18:55:46.353937272Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=99ece862-bb51-469d-97ae-ffa14bb43bf0 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:55:46 addons-685250 crio[1028]: time="2024-09-19 18:55:46.353946158Z" level=info msg="Checking image status: docker.io/nginx:latest" id=cae19829-afc0-4312-b0d3-4b73eb1bd1c7 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:55:46 addons-685250 crio[1028]: time="2024-09-19 18:55:46.354321337Z" level=info msg="Image docker.io/nginx:alpine not found" id=99ece862-bb51-469d-97ae-ffa14bb43bf0 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:55:46 addons-685250 crio[1028]: time="2024-09-19 18:55:46.354376941Z" level=info msg="Image docker.io/nginx:latest not found" id=cae19829-afc0-4312-b0d3-4b73eb1bd1c7 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:55:51 addons-685250 crio[1028]: time="2024-09-19 18:55:51.354278856Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a55a9113-cce5-47e6-b388-4470a41c63a2 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:55:51 addons-685250 crio[1028]: time="2024-09-19 18:55:51.354577300Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=a55a9113-cce5-47e6-b388-4470a41c63a2 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:56:00 addons-685250 crio[1028]: time="2024-09-19 18:56:00.353586086Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=7cdef5b6-283c-42cd-ae1c-ab87d964af29 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:56:00 addons-685250 crio[1028]: time="2024-09-19 18:56:00.353814679Z" level=info msg="Image docker.io/nginx:alpine not found" id=7cdef5b6-283c-42cd-ae1c-ab87d964af29 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:56:00 addons-685250 crio[1028]: time="2024-09-19 18:56:00.354690236Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=fb57009a-9a47-4627-a0cf-0c670f9cb291 name=/runtime.v1.ImageService/PullImage
	Sep 19 18:56:00 addons-685250 crio[1028]: time="2024-09-19 18:56:00.376874804Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Sep 19 18:56:01 addons-685250 crio[1028]: time="2024-09-19 18:56:01.353551117Z" level=info msg="Checking image status: docker.io/nginx:latest" id=af3fd5df-3309-4fe4-adcc-43d3ead547d1 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:56:01 addons-685250 crio[1028]: time="2024-09-19 18:56:01.353802621Z" level=info msg="Image docker.io/nginx:latest not found" id=af3fd5df-3309-4fe4-adcc-43d3ead547d1 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:56:02 addons-685250 crio[1028]: time="2024-09-19 18:56:02.353928294Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b8aa1653-92e0-4157-9d0d-373812a60d62 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 18:56:02 addons-685250 crio[1028]: time="2024-09-19 18:56:02.354216858Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b8aa1653-92e0-4157-9d0d-373812a60d62 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	9631f3dbcf504       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          14 minutes ago      Running             csi-snapshotter                          0                   9d633bbb3f6dc       csi-hostpathplugin-wvvls
	96030830b51d1       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          14 minutes ago      Running             csi-provisioner                          0                   9d633bbb3f6dc       csi-hostpathplugin-wvvls
	32bc4d23668fc       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            14 minutes ago      Running             liveness-probe                           0                   9d633bbb3f6dc       csi-hostpathplugin-wvvls
	0cc2312cf82a4       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           14 minutes ago      Running             hostpath                                 0                   9d633bbb3f6dc       csi-hostpathplugin-wvvls
	8763c1c636d0e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                                 14 minutes ago      Running             gcp-auth                                 0                   c4905e6f06668       gcp-auth-89d5ffd79-5xmj7
	6ec44220259bc       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6                             14 minutes ago      Running             controller                               0                   7eeed172b87cd       ingress-nginx-controller-bc57996ff-jwqfz
	533fe244bc19f       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                14 minutes ago      Running             node-driver-registrar                    0                   9d633bbb3f6dc       csi-hostpathplugin-wvvls
	781e8a586344e       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              15 minutes ago      Running             csi-resizer                              0                   79d20db0c7bd8       csi-hostpath-resizer-0
	135118d48b8e5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   15 minutes ago      Exited              patch                                    0                   b5047ec8d653b       ingress-nginx-admission-patch-zkk9z
	6148ff93b7e21       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      15 minutes ago      Running             volume-snapshot-controller               0                   2c111431a9537       snapshot-controller-56fcc65765-hpwtx
	776cccb0a5bb1       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   15 minutes ago      Running             csi-external-health-monitor-controller   0                   9d633bbb3f6dc       csi-hostpathplugin-wvvls
	ae42c7830ff31       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      15 minutes ago      Running             volume-snapshot-controller               0                   a67d1128cd369       snapshot-controller-56fcc65765-qsngh
	3bae675b3b545       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012                   15 minutes ago      Exited              create                                   0                   00fa51ee04653       ingress-nginx-admission-create-rqqsb
	cd361280e82f5       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             15 minutes ago      Running             csi-attacher                             0                   995144454e795       csi-hostpath-attacher-0
	71455e9d9d7f9       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             15 minutes ago      Running             minikube-ingress-dns                     0                   1b3ebc5c0bddd       kube-ingress-dns-minikube
	c265d33c64155       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             15 minutes ago      Running             storage-provisioner                      0                   f0b8765d93237       storage-provisioner
	61dc325585534       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             15 minutes ago      Running             coredns                                  0                   70191f5a80edd       coredns-7c65d6cfc9-xxkrh
	28c707c30998a       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                                             16 minutes ago      Running             kindnet-cni                              0                   d0d4a24bd5f33       kindnet-nr24c
	1577029617c13       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                                             16 minutes ago      Running             kube-proxy                               0                   006fe668e3bca       kube-proxy-tt5h8
	a9c5d6500618f       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                                             16 minutes ago      Running             kube-scheduler                           0                   6a497d68d67db       kube-scheduler-addons-685250
	4b38bddc95b37       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                                             16 minutes ago      Running             kube-controller-manager                  0                   8dc935b2a1118       kube-controller-manager-addons-685250
	daa04e6dadb8c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                                             16 minutes ago      Running             etcd                                     0                   49d2cd4b861cb       etcd-addons-685250
	d48e736f52b35       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                                             16 minutes ago      Running             kube-apiserver                           0                   ee84a44e45fe4       kube-apiserver-addons-685250
	
	
	==> coredns [61dc325585534f9ca5eb5fa00ef69f2c7dbea58f10e758226f8469c2c166db5a] <==
	[INFO] 10.244.0.18:34436 - 35698 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000108309s
	[INFO] 10.244.0.18:53834 - 64751 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000039533s
	[INFO] 10.244.0.18:53834 - 26861 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000063287s
	[INFO] 10.244.0.18:40724 - 19030 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005948549s
	[INFO] 10.244.0.18:40724 - 2384 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.00624164s
	[INFO] 10.244.0.18:55178 - 49717 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004779846s
	[INFO] 10.244.0.18:55178 - 43576 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.008989283s
	[INFO] 10.244.0.18:35236 - 29185 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005503532s
	[INFO] 10.244.0.18:35236 - 29053 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006569969s
	[INFO] 10.244.0.18:58901 - 23064 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00007067s
	[INFO] 10.244.0.18:58901 - 45339 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000090322s
	[INFO] 10.244.0.21:52948 - 4177 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000227224s
	[INFO] 10.244.0.21:45787 - 22571 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000317788s
	[INFO] 10.244.0.21:59704 - 52899 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000152904s
	[INFO] 10.244.0.21:50018 - 4022 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000239218s
	[INFO] 10.244.0.21:53553 - 39101 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000141888s
	[INFO] 10.244.0.21:37741 - 20732 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000217668s
	[INFO] 10.244.0.21:55394 - 50618 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.005906983s
	[INFO] 10.244.0.21:37603 - 64460 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.00595091s
	[INFO] 10.244.0.21:43538 - 27403 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006051611s
	[INFO] 10.244.0.21:54216 - 9854 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.00637344s
	[INFO] 10.244.0.21:36139 - 65099 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007481578s
	[INFO] 10.244.0.21:49105 - 14009 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.010893085s
	[INFO] 10.244.0.21:52556 - 17077 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000849386s
	[INFO] 10.244.0.21:56780 - 3812 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000933647s
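The NXDOMAIN runs above are the pod resolver walking its resolv.conf search path before the final absolute queries get NOERROR answers. A sketch of what the querying pod's resolver config plausibly looks like, reconstructed from the suffixes visible in the queries (the first search entry and the nameserver are assumptions, not captured in this report):

    $ cat /etc/resolv.conf    # inside the querying pod
    search default.svc.cluster.local svc.cluster.local cluster.local us-central1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
    nameserver 10.96.0.10     # assumed kube-dns ClusterIP
    options ndots:5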
	
	
	==> describe nodes <==
	Name:               addons-685250
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-685250
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=addons-685250
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_19T18_39_40_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-685250
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-685250"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 18:39:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-685250
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 18:55:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 18:51:43 +0000   Thu, 19 Sep 2024 18:39:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 18:51:43 +0000   Thu, 19 Sep 2024 18:39:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 18:51:43 +0000   Thu, 19 Sep 2024 18:39:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 18:51:43 +0000   Thu, 19 Sep 2024 18:40:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-685250
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 59964951ae744ca891a1d33d48395cb6
	  System UUID:                ca4c5e3c-dd72-4ffd-b420-cdf7d87c497b
	  Boot ID:                    e13586fb-8251-4108-a9ef-ca5be7772d16
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  gcp-auth                    gcp-auth-89d5ffd79-5xmj7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-jwqfz    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         16m
	  kube-system                 coredns-7c65d6cfc9-xxkrh                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     16m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 csi-hostpathplugin-wvvls                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 etcd-addons-685250                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         16m
	  kube-system                 kindnet-nr24c                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-addons-685250                250m (3%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-addons-685250       200m (2%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-tt5h8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-addons-685250                100m (1%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 snapshot-controller-56fcc65765-hpwtx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 snapshot-controller-56fcc65765-qsngh        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 16m                kube-proxy       
	  Normal   Starting                 16m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 16m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node addons-685250 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node addons-685250 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node addons-685250 status is now: NodeHasSufficientPID
	  Normal   Starting                 16m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 16m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  16m                kubelet          Node addons-685250 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m                kubelet          Node addons-685250 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m                kubelet          Node addons-685250 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m                node-controller  Node addons-685250 event: Registered Node addons-685250 in Controller
	  Normal   NodeReady                15m                kubelet          Node addons-685250 status is now: NodeReady
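As a cross-check, the Allocated resources totals above are just the column sums of the non-terminated pods: CPU requests 100m + 100m + 100m + 100m + 250m + 200m + 100m = 950m, CPU limits 100m (kindnet only), memory requests 90Mi + 70Mi + 100Mi + 50Mi = 310Mi, and memory limits 170Mi + 50Mi = 220Mi.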
	
	
	==> dmesg <==
	[  +0.000002] ll header: 00000000: 02 42 9c 9b da 37 02 42 c0 a8 55 02 08 00
	[ +49.810034] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000005] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.000002] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000002] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +1.030260] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000006] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.003972] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000005] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +2.011865] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000005] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.003970] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000004] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +4.219718] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000009] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000005] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[Sep19 18:17] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000009] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.000035] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000006] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
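The repeated "martian source" entries suggest martian-packet logging is in effect for the Docker bridge carrying cluster traffic; whether it is enabled for a given interface can be checked with sysctl, using the bridge name from the log (logging fires if either the all or per-interface key is set):

    sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.br-8bc94e1d825c.log_martians    # 1 = enabled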
	
	
	==> etcd [daa04e6dadb8c32c9bf9a76c2e1d6e56f31b6380f42e6dc011476b2d6f972acf] <==
	{"level":"info","ts":"2024-09-19T18:39:45.855653Z","caller":"traceutil/trace.go:171","msg":"trace[11607049] linearizableReadLoop","detail":"{readStateIndex:406; appliedIndex:404; }","duration":"105.61545ms","start":"2024-09-19T18:39:45.750016Z","end":"2024-09-19T18:39:45.855632Z","steps":["trace[11607049] 'read index received'  (duration: 86.226896ms)","trace[11607049] 'applied index is now lower than readState.Index'  (duration: 19.387979ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-19T18:39:45.855963Z","caller":"traceutil/trace.go:171","msg":"trace[722294032] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"106.750007ms","start":"2024-09-19T18:39:45.749192Z","end":"2024-09-19T18:39:45.855942Z","steps":["trace[722294032] 'process raft request'  (duration: 100.852428ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:39:45.856192Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.988653ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4034"}
	{"level":"info","ts":"2024-09-19T18:39:45.856224Z","caller":"traceutil/trace.go:171","msg":"trace[83912261] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:395; }","duration":"202.035355ms","start":"2024-09-19T18:39:45.654180Z","end":"2024-09-19T18:39:45.856215Z","steps":["trace[83912261] 'agreement among raft nodes before linearized reading'  (duration: 201.947574ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:39:45.856375Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.947549ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-19T18:39:45.856402Z","caller":"traceutil/trace.go:171","msg":"trace[297556485] range","detail":"{range_begin:/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset; range_end:; response_count:0; response_revision:395; }","duration":"206.977474ms","start":"2024-09-19T18:39:45.649415Z","end":"2024-09-19T18:39:45.856393Z","steps":["trace[297556485] 'agreement among raft nodes before linearized reading'  (duration: 206.93087ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:39:45.856532Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.416757ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-19T18:39:45.856554Z","caller":"traceutil/trace.go:171","msg":"trace[47804488] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:0; response_revision:395; }","duration":"103.442648ms","start":"2024-09-19T18:39:45.753105Z","end":"2024-09-19T18:39:45.856548Z","steps":["trace[47804488] 'agreement among raft nodes before linearized reading'  (duration: 103.402348ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:39:46.450928Z","caller":"traceutil/trace.go:171","msg":"trace[447015363] transaction","detail":"{read_only:false; response_revision:402; number_of_response:1; }","duration":"192.15555ms","start":"2024-09-19T18:39:46.258754Z","end":"2024-09-19T18:39:46.450910Z","steps":["trace[447015363] 'process raft request'  (duration: 192.041293ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:39:46.457451Z","caller":"traceutil/trace.go:171","msg":"trace[199583041] transaction","detail":"{read_only:false; response_revision:403; number_of_response:1; }","duration":"102.841342ms","start":"2024-09-19T18:39:46.354595Z","end":"2024-09-19T18:39:46.457437Z","steps":["trace[199583041] 'process raft request'  (duration: 102.766841ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-19T18:39:47.149186Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.608135ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128032005940909206 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-h29wt\" mod_revision:386 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-h29wt\" value_size:3943 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-h29wt\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-19T18:39:47.149875Z","caller":"traceutil/trace.go:171","msg":"trace[786871471] transaction","detail":"{read_only:false; response_revision:409; number_of_response:1; }","duration":"212.562991ms","start":"2024-09-19T18:39:46.937292Z","end":"2024-09-19T18:39:47.149855Z","steps":["trace[786871471] 'process raft request'  (duration: 110.633244ms)","trace[786871471] 'compare'  (duration: 100.378906ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-19T18:39:47.150124Z","caller":"traceutil/trace.go:171","msg":"trace[713102619] transaction","detail":"{read_only:false; response_revision:410; number_of_response:1; }","duration":"212.118368ms","start":"2024-09-19T18:39:46.937993Z","end":"2024-09-19T18:39:47.150111Z","steps":["trace[713102619] 'process raft request'  (duration: 211.29202ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:39:47.150315Z","caller":"traceutil/trace.go:171","msg":"trace[1466387580] transaction","detail":"{read_only:false; response_revision:411; number_of_response:1; }","duration":"203.943604ms","start":"2024-09-19T18:39:46.946361Z","end":"2024-09-19T18:39:47.150305Z","steps":["trace[1466387580] 'process raft request'  (duration: 203.030294ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:39:47.150417Z","caller":"traceutil/trace.go:171","msg":"trace[1484778379] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"202.338487ms","start":"2024-09-19T18:39:46.948072Z","end":"2024-09-19T18:39:47.150411Z","steps":["trace[1484778379] 'process raft request'  (duration: 201.364589ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:39:47.150492Z","caller":"traceutil/trace.go:171","msg":"trace[1762014815] linearizableReadLoop","detail":"{readStateIndex:421; appliedIndex:419; }","duration":"204.192549ms","start":"2024-09-19T18:39:46.946292Z","end":"2024-09-19T18:39:47.150485Z","steps":["trace[1762014815] 'read index received'  (duration: 101.644452ms)","trace[1762014815] 'applied index is now lower than readState.Index'  (duration: 102.547441ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-19T18:39:47.150718Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.417513ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-19T18:39:47.150742Z","caller":"traceutil/trace.go:171","msg":"trace[30934350] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/minikube-ingress-dns; range_end:; response_count:0; response_revision:413; }","duration":"204.449131ms","start":"2024-09-19T18:39:46.946286Z","end":"2024-09-19T18:39:47.150735Z","steps":["trace[30934350] 'agreement among raft nodes before linearized reading'  (duration: 204.399184ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-19T18:41:08.113307Z","caller":"traceutil/trace.go:171","msg":"trace[1867049731] transaction","detail":"{read_only:false; response_revision:1173; number_of_response:1; }","duration":"218.87531ms","start":"2024-09-19T18:41:07.893123Z","end":"2024-09-19T18:41:08.111998Z","steps":["trace[1867049731] 'process raft request'  (duration: 146.821964ms)","trace[1867049731] 'compare'  (duration: 71.937946ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-19T18:49:35.458285Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1609}
	{"level":"info","ts":"2024-09-19T18:49:35.481341Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1609,"took":"22.590141ms","hash":3032817660,"current-db-size-bytes":6651904,"current-db-size":"6.7 MB","current-db-size-in-use-bytes":3510272,"current-db-size-in-use":"3.5 MB"}
	{"level":"info","ts":"2024-09-19T18:49:35.481386Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3032817660,"revision":1609,"compact-revision":-1}
	{"level":"info","ts":"2024-09-19T18:54:35.463171Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2033}
	{"level":"info","ts":"2024-09-19T18:54:35.479457Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2033,"took":"15.735537ms","hash":3624308866,"current-db-size-bytes":6651904,"current-db-size":"6.7 MB","current-db-size-in-use-bytes":4227072,"current-db-size-in-use":"4.2 MB"}
	{"level":"info","ts":"2024-09-19T18:54:35.479504Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3624308866,"revision":2033,"compact-revision":1609}
	
	
	==> gcp-auth [8763c1c636d0e544cec68dd7fd43a6178da8c1609fed0cf08b900e90bcd721ae] <==
	2024/09/19 18:41:56 Ready to write response ...
	2024/09/19 18:41:57 Ready to marshal response ...
	2024/09/19 18:41:57 Ready to write response ...
	2024/09/19 18:41:57 Ready to marshal response ...
	2024/09/19 18:41:57 Ready to write response ...
	2024/09/19 18:50:00 Ready to marshal response ...
	2024/09/19 18:50:00 Ready to write response ...
	2024/09/19 18:50:00 Ready to marshal response ...
	2024/09/19 18:50:00 Ready to write response ...
	2024/09/19 18:50:06 Ready to marshal response ...
	2024/09/19 18:50:06 Ready to write response ...
	2024/09/19 18:50:09 Ready to marshal response ...
	2024/09/19 18:50:09 Ready to write response ...
	2024/09/19 18:50:09 Ready to marshal response ...
	2024/09/19 18:50:09 Ready to write response ...
	2024/09/19 18:50:59 Ready to marshal response ...
	2024/09/19 18:50:59 Ready to write response ...
	2024/09/19 18:50:59 Ready to marshal response ...
	2024/09/19 18:50:59 Ready to write response ...
	2024/09/19 18:50:59 Ready to marshal response ...
	2024/09/19 18:50:59 Ready to write response ...
	2024/09/19 18:51:33 Ready to marshal response ...
	2024/09/19 18:51:33 Ready to write response ...
	2024/09/19 18:51:42 Ready to marshal response ...
	2024/09/19 18:51:42 Ready to write response ...
	
	
	==> kernel <==
	 18:56:07 up  3:38,  0 users,  load average: 0.31, 0.23, 0.44
	Linux addons-685250 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [28c707c30998afef00bf2e1d4d63027e3b714da58f6340a4e3f39f0fe7fc84ea] <==
	I0919 18:53:58.358299       1 main.go:299] handling current node
	I0919 18:54:08.351385       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:54:08.351436       1 main.go:299] handling current node
	I0919 18:54:18.353091       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:54:18.353150       1 main.go:299] handling current node
	I0919 18:54:28.350866       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:54:28.350907       1 main.go:299] handling current node
	I0919 18:54:38.355399       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:54:38.355443       1 main.go:299] handling current node
	I0919 18:54:48.350983       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:54:48.351021       1 main.go:299] handling current node
	I0919 18:54:58.351456       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:54:58.351505       1 main.go:299] handling current node
	I0919 18:55:08.355945       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:55:08.355985       1 main.go:299] handling current node
	I0919 18:55:18.353447       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:55:18.353491       1 main.go:299] handling current node
	I0919 18:55:28.353417       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:55:28.353453       1 main.go:299] handling current node
	I0919 18:55:38.351131       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:55:38.351185       1 main.go:299] handling current node
	I0919 18:55:48.351374       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:55:48.351408       1 main.go:299] handling current node
	I0919 18:55:58.358563       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 18:55:58.358598       1 main.go:299] handling current node
	
	
	==> kube-apiserver [d48e736f52b3564c18b33785792dc68172b6579a9971f0e60784ba243e67d4bf] <==
	 > logger="UnhandledError"
	E0919 18:41:46.384826       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.77.71:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.77.71:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.77.71:443: connect: connection refused" logger="UnhandledError"
	I0919 18:41:46.398246       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0919 18:50:10.564173       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:10.569821       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:10.575508       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:25.576915       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:30.878332       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:31.884590       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:32.891043       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:33.897594       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:34.904265       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:35.910640       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:36.916660       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:37.922615       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:38.928704       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0919 18:50:39.935718       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0919 18:50:59.939369       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.7.39"}
	I0919 18:51:21.107714       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0919 18:51:22.123982       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0919 18:51:39.581185       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.29:41094: read: connection reset by peer
	E0919 18:51:41.443959       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I0919 18:51:42.224676       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0919 18:51:42.394849       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.136.235"}
	I0919 18:55:47.437366       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
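
Note: the v1beta1.metrics.k8s.io errors above show the aggregated metrics API backend (the service at 10.111.77.71:443) refusing connections, which is expected while that addon is still starting or being torn down. The aggregation state can be checked directly (a sketch):

	kubectl get apiservice v1beta1.metrics.k8s.io -o wide   # AVAILABLE column
	kubectl get apiservices                                 # all aggregated APIs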
	
	
	==> kube-controller-manager [4b38bddc95b37cfafc20074c495de5665cfbaf5c4a53c28feafa7a6ee92f1148] <==
	I0919 18:51:28.399705       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	W0919 18:51:31.445687       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:51:31.445728       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0919 18:51:41.905172       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-b48cc5f79" duration="8.511µs"
	W0919 18:51:42.682508       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:51:42.682559       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0919 18:51:43.580046       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-685250"
	I0919 18:51:43.942974       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0919 18:51:43.943019       1 shared_informer.go:320] Caches are synced for resource quota
	I0919 18:51:44.345759       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0919 18:51:44.345799       1 shared_informer.go:320] Caches are synced for garbage collector
	I0919 18:51:52.413869       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	W0919 18:51:57.889795       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:51:57.889847       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:52:27.558659       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:52:27.558704       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:53:24.382837       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:53:24.382902       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:53:58.320420       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:53:58.320480       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:54:36.903837       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:54:36.903888       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0919 18:55:34.730951       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0919 18:55:34.731007       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0919 18:55:36.833056       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="10.157µs"
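
Note: the recurring *v1.PartialObjectMetadata list/watch failures above are consistent with the metadata informer still tracking a resource type whose CRD has been deleted (the apiserver log shows traces.gadget.kinvolk.io watchers being terminated at 18:51:22, just before these errors begin). A sketch for confirming which group is gone:

	kubectl get crd traces.gadget.kinvolk.io            # expect NotFound after teardown
	kubectl api-resources --api-group=gadget.kinvolk.io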
	
	
	==> kube-proxy [1577029617c1331fd2349f20f8dc5051ce5a12bf840e9ba2cf92af826c1e7c2d] <==
	I0919 18:39:47.957278       1 server_linux.go:66] "Using iptables proxy"
	I0919 18:39:49.044392       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0919 18:39:49.044560       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 18:39:49.357227       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 18:39:49.357310       1 server_linux.go:169] "Using iptables Proxier"
	I0919 18:39:49.437470       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 18:39:49.438149       1 server.go:483] "Version info" version="v1.31.1"
	I0919 18:39:49.438227       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 18:39:49.444383       1 config.go:199] "Starting service config controller"
	I0919 18:39:49.444434       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0919 18:39:49.444451       1 config.go:105] "Starting endpoint slice config controller"
	I0919 18:39:49.444468       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0919 18:39:49.445015       1 config.go:328] "Starting node config controller"
	I0919 18:39:49.445038       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0919 18:39:49.544520       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0919 18:39:49.544894       1 shared_informer.go:320] Caches are synced for service config
	I0919 18:39:49.545185       1 shared_informer.go:320] Caches are synced for node config
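
Note: the "Kube-proxy configuration may be incomplete or incorrect" warning above is kube-proxy itself suggesting `--nodeport-addresses primary`; with nodePortAddresses unset, NodePort connections are accepted on every local IP. In a kubeadm-style cluster like minikube's, the setting lives in the kube-proxy ConfigMap (a sketch, assuming the standard kubeadm layout with the config under the config.conf key):

	kubectl -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses
	# After setting nodePortAddresses: ["primary"] in config.conf, restart the DaemonSet:
	kubectl -n kube-system edit configmap kube-proxy
	kubectl -n kube-system rollout restart daemonset kube-proxy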
	
	
	==> kube-scheduler [a9c5d6500618ffef785def4b664db368412c3cb073da078b47b4782f26dca7ae] <==
	W0919 18:39:36.759688       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0919 18:39:36.759698       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 18:39:36.759716       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:36.759719       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0919 18:39:36.759767       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0919 18:39:36.759715       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:37.577548       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0919 18:39:37.577594       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:37.591157       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0919 18:39:37.591194       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:37.662233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0919 18:39:37.662283       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:37.691829       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0919 18:39:37.691889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:37.691841       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0919 18:39:37.691945       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:37.788039       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0919 18:39:37.788093       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:37.902881       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0919 18:39:37.902929       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0919 18:39:37.943554       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0919 18:39:37.943606       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0919 18:39:37.964311       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0919 18:39:37.964357       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0919 18:39:40.957211       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
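
Note: the burst of "forbidden" list/watch errors above occurs in the first seconds after the scheduler starts, before its RBAC bindings are visible to the apiserver; the final "Caches are synced" line shows it recovered on its own. Failures of this shape that persist past startup could be checked with impersonation (a sketch):

	kubectl auth can-i list pods --as=system:kube-scheduler --all-namespaces
	kubectl auth can-i watch replicationcontrollers --as=system:kube-scheduler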
	
	
	==> kubelet <==
	Sep 19 18:55:32 addons-685250 kubelet[1619]: E0919 18:55:32.355099    1619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="ebd6539d-2dc6-46b7-8766-cd26ce5e6547"
	Sep 19 18:55:35 addons-685250 kubelet[1619]: E0919 18:55:35.354759    1619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod" podUID="337122f1-f839-443e-89c9-ab116e67ccad"
	Sep 19 18:55:38 addons-685250 kubelet[1619]: I0919 18:55:38.088901    1619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r9ph\" (UniqueName: \"kubernetes.io/projected/0041dcd9-b46b-406b-a78c-728fda2b92cc-kube-api-access-6r9ph\") pod \"0041dcd9-b46b-406b-a78c-728fda2b92cc\" (UID: \"0041dcd9-b46b-406b-a78c-728fda2b92cc\") "
	Sep 19 18:55:38 addons-685250 kubelet[1619]: I0919 18:55:38.088962    1619 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0041dcd9-b46b-406b-a78c-728fda2b92cc-tmp-dir\") pod \"0041dcd9-b46b-406b-a78c-728fda2b92cc\" (UID: \"0041dcd9-b46b-406b-a78c-728fda2b92cc\") "
	Sep 19 18:55:38 addons-685250 kubelet[1619]: I0919 18:55:38.089372    1619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0041dcd9-b46b-406b-a78c-728fda2b92cc-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "0041dcd9-b46b-406b-a78c-728fda2b92cc" (UID: "0041dcd9-b46b-406b-a78c-728fda2b92cc"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 19 18:55:38 addons-685250 kubelet[1619]: I0919 18:55:38.091431    1619 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0041dcd9-b46b-406b-a78c-728fda2b92cc-kube-api-access-6r9ph" (OuterVolumeSpecName: "kube-api-access-6r9ph") pod "0041dcd9-b46b-406b-a78c-728fda2b92cc" (UID: "0041dcd9-b46b-406b-a78c-728fda2b92cc"). InnerVolumeSpecName "kube-api-access-6r9ph". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 19 18:55:38 addons-685250 kubelet[1619]: I0919 18:55:38.190039    1619 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0041dcd9-b46b-406b-a78c-728fda2b92cc-tmp-dir\") on node \"addons-685250\" DevicePath \"\""
	Sep 19 18:55:38 addons-685250 kubelet[1619]: I0919 18:55:38.190072    1619 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6r9ph\" (UniqueName: \"kubernetes.io/projected/0041dcd9-b46b-406b-a78c-728fda2b92cc-kube-api-access-6r9ph\") on node \"addons-685250\" DevicePath \"\""
	Sep 19 18:55:38 addons-685250 kubelet[1619]: I0919 18:55:38.941930    1619 scope.go:117] "RemoveContainer" containerID="3def0c19497bb9d4281de3fde17e1803880d219071a41edf14d086fcb4db5a47"
	Sep 19 18:55:38 addons-685250 kubelet[1619]: I0919 18:55:38.958940    1619 scope.go:117] "RemoveContainer" containerID="3def0c19497bb9d4281de3fde17e1803880d219071a41edf14d086fcb4db5a47"
	Sep 19 18:55:38 addons-685250 kubelet[1619]: E0919 18:55:38.959440    1619 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3def0c19497bb9d4281de3fde17e1803880d219071a41edf14d086fcb4db5a47\": container with ID starting with 3def0c19497bb9d4281de3fde17e1803880d219071a41edf14d086fcb4db5a47 not found: ID does not exist" containerID="3def0c19497bb9d4281de3fde17e1803880d219071a41edf14d086fcb4db5a47"
	Sep 19 18:55:38 addons-685250 kubelet[1619]: I0919 18:55:38.959478    1619 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3def0c19497bb9d4281de3fde17e1803880d219071a41edf14d086fcb4db5a47"} err="failed to get container status \"3def0c19497bb9d4281de3fde17e1803880d219071a41edf14d086fcb4db5a47\": rpc error: code = NotFound desc = could not find container \"3def0c19497bb9d4281de3fde17e1803880d219071a41edf14d086fcb4db5a47\": container with ID starting with 3def0c19497bb9d4281de3fde17e1803880d219071a41edf14d086fcb4db5a47 not found: ID does not exist"
	Sep 19 18:55:39 addons-685250 kubelet[1619]: I0919 18:55:39.354797    1619 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0041dcd9-b46b-406b-a78c-728fda2b92cc" path="/var/lib/kubelet/pods/0041dcd9-b46b-406b-a78c-728fda2b92cc/volumes"
	Sep 19 18:55:39 addons-685250 kubelet[1619]: E0919 18:55:39.354849    1619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="c9e71acf-38e0-445c-9d8f-3735cbf69aa1"
	Sep 19 18:55:39 addons-685250 kubelet[1619]: E0919 18:55:39.656893    1619 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772139656699165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:522996,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:55:39 addons-685250 kubelet[1619]: E0919 18:55:39.656938    1619 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772139656699165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:522996,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:55:46 addons-685250 kubelet[1619]: E0919 18:55:46.354587    1619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="ebd6539d-2dc6-46b7-8766-cd26ce5e6547"
	Sep 19 18:55:46 addons-685250 kubelet[1619]: E0919 18:55:46.354678    1619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod" podUID="337122f1-f839-443e-89c9-ab116e67ccad"
	Sep 19 18:55:49 addons-685250 kubelet[1619]: E0919 18:55:49.659012    1619 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772149658739231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:522996,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:55:49 addons-685250 kubelet[1619]: E0919 18:55:49.659046    1619 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772149658739231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:522996,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:55:51 addons-685250 kubelet[1619]: E0919 18:55:51.354856    1619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="c9e71acf-38e0-445c-9d8f-3735cbf69aa1"
	Sep 19 18:55:59 addons-685250 kubelet[1619]: E0919 18:55:59.660781    1619 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772159660591350,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:522996,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:55:59 addons-685250 kubelet[1619]: E0919 18:55:59.660815    1619 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772159660591350,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:522996,},InodesUsed:&UInt64Value{Value:211,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 18:56:02 addons-685250 kubelet[1619]: I0919 18:56:02.353438    1619 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-7c65d6cfc9-xxkrh" secret="" err="secret \"gcp-auth\" not found"
	Sep 19 18:56:02 addons-685250 kubelet[1619]: E0919 18:56:02.354512    1619 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="c9e71acf-38e0-445c-9d8f-3735cbf69aa1"
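
Note: the recurring "Error syncing pod" entries above are ImagePullBackOff failures rooted in Docker Hub's pull rate limit (see the toomanyrequests events later in this report). The standard mitigation is to pull as an authenticated user via an imagePullSecret; names and credentials below are placeholders, not values from this run:

	kubectl create secret docker-registry dockerhub-creds \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<access-token>
	kubectl patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'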
	
	
	==> storage-provisioner [c265d33c64155de4fde21bb6eae221bdd5a2524b7a15aa0b673f23ce4f17b12d] <==
	I0919 18:40:29.640679       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 18:40:29.648412       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 18:40:29.648464       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0919 18:40:29.655439       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 18:40:29.655525       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9a3690d0-7216-4b96-a260-4e04cffeb393", APIVersion:"v1", ResourceVersion:"963", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-685250_e66922b4-9073-4377-9148-47e4da8ece38 became leader
	I0919 18:40:29.655628       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-685250_e66922b4-9073-4377-9148-47e4da8ece38!
	I0919 18:40:29.756484       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-685250_e66922b4-9073-4377-9148-47e4da8ece38!
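
Note: the provisioner's leader election above is backed by an Endpoints lock named k8s.io-minikube-hostpath (Kind and Name taken from the event line); the current holder is recorded in the control-plane.alpha.kubernetes.io/leader annotation on that object. A sketch for inspecting it:

	kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml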
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-685250 -n addons-685250
helpers_test.go:261: (dbg) Run:  kubectl --context addons-685250 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox nginx task-pv-pod ingress-nginx-admission-create-rqqsb ingress-nginx-admission-patch-zkk9z
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-685250 describe pod busybox nginx task-pv-pod ingress-nginx-admission-create-rqqsb ingress-nginx-admission-patch-zkk9z
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-685250 describe pod busybox nginx task-pv-pod ingress-nginx-admission-create-rqqsb ingress-nginx-admission-patch-zkk9z: exit status 1 (81.681805ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-685250/192.168.49.2
	Start Time:       Thu, 19 Sep 2024 18:41:57 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pbctc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-pbctc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  14m                   default-scheduler  Successfully assigned default/busybox to addons-685250
	  Normal   Pulling    12m (x4 over 14m)     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     12m (x4 over 14m)     kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     12m (x4 over 14m)     kubelet            Error: ErrImagePull
	  Warning  Failed     12m (x6 over 14m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3m59s (x43 over 14m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-685250/192.168.49.2
	Start Time:       Thu, 19 Sep 2024 18:51:42 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w8nj8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-w8nj8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  4m26s                  default-scheduler  Successfully assigned default/nginx to addons-685250
	  Warning  Failed     2m23s (x2 over 3m55s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     59s (x3 over 3m55s)    kubelet            Error: ErrImagePull
	  Warning  Failed     59s                    kubelet            Failed to pull image "docker.io/nginx:alpine": loading manifest for target platform: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    22s (x5 over 3m55s)    kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     22s (x5 over 3m55s)    kubelet            Error: ImagePullBackOff
	  Normal   Pulling    8s (x4 over 4m26s)     kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-685250/192.168.49.2
	Start Time:       Thu, 19 Sep 2024 18:50:06 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.25
	IPs:
	  IP:  10.244.0.25
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mzftq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-mzftq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m2s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-685250
	  Warning  Failed     5m16s                kubelet            Failed to pull image "docker.io/nginx": determining manifest MIME type for docker://nginx:latest: reading manifest sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m54s                kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m7s (x4 over 6m2s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     96s (x4 over 5m16s)  kubelet            Error: ErrImagePull
	  Warning  Failed     96s (x2 over 4m31s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     85s (x6 over 5m15s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    59s (x8 over 5m15s)  kubelet            Back-off pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-rqqsb" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-zkk9z" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-685250 describe pod busybox nginx task-pv-pod ingress-nginx-admission-create-rqqsb ingress-nginx-admission-patch-zkk9z: exit status 1
--- FAIL: TestAddons/parallel/CSI (368.80s)

TestFunctional/parallel/PersistentVolumeClaim (188.95s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [7b6159a5-816c-4716-a381-e69bd618498d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004307233s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-141069 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-141069 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-141069 get pvc myclaim -o=json
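
Note: at this point the test has created the claim and is about to start a pod that mounts it. For manual triage of a run like this, the claim's binding phase and its events are the first things to check (context name taken from the commands above):

	kubectl --context functional-141069 get pvc myclaim \
	  -o jsonpath='{.status.phase}{"\n"}'
	kubectl --context functional-141069 get events \
	  --field-selector involvedObject.name=myclaim
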
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-141069 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a92b6612-8ecf-46c0-a205-78bd8970a8ff] Pending
helpers_test.go:344: "sp-pod" [a92b6612-8ecf-46c0-a205-78bd8970a8ff] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-141069 -n functional-141069
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2024-09-19 19:06:44.898546111 +0000 UTC m=+1674.801505932
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-141069 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-141069 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-141069/192.168.49.2
Start Time:       Thu, 19 Sep 2024 19:03:44 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:  10.244.0.7
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-snblv (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-snblv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  3m                  default-scheduler  Successfully assigned default/sp-pod to functional-141069
  Normal   Pulling    107s (x2 over 3m)   kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     22s (x2 over 2m1s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     22s (x2 over 2m1s)  kubelet            Error: ErrImagePull
  Normal   BackOff    7s (x2 over 2m1s)   kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     7s (x2 over 2m1s)   kubelet            Error: ImagePullBackOff
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-141069 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-141069 logs sp-pod -n default: exit status 1 (67.479512ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: image can't be pulled

                                                
                                                
** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-141069 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
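
The failure mode is identical to the CSI case: the pod stays Pending because the kubelet cannot pull docker.io/nginx, so the label-selector wait at functional_test_pvc_test.go:130 exhausts its 3m0s budget. For reference, a minimal client-go sketch of that kind of wait loop, assuming the functional-141069 kubeconfig context and the label selector used above (an illustrative standalone program, not the suite's actual helper):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the same kubeconfig context the test runs against.
		cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
			clientcmd.NewDefaultClientConfigLoadingRules(),
			&clientcmd.ConfigOverrides{CurrentContext: "functional-141069"},
		).ClientConfig()
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll for up to 3m0s, mirroring the test's timeout.
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
		defer cancel()
		for {
			pods, err := cs.CoreV1().Pods("default").List(ctx,
				metav1.ListOptions{LabelSelector: "test=storage-provisioner"})
			if err == nil {
				for _, p := range pods.Items {
					fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
					if p.Status.Phase == corev1.PodRunning {
						return
					}
				}
			}
			select {
			case <-ctx.Done():
				panic("pod did not reach Running within 3m0s: " + ctx.Err().Error())
			case <-time.After(2 * time.Second):
			}
		}
	}
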
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-141069
helpers_test.go:235: (dbg) docker inspect functional-141069:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "64286a6db1733b30e58ed37618c122199b2932f3f711bbedbf043d4175d87f6d",
	        "Created": "2024-09-19T19:00:42.069492015Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 787785,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-19T19:00:42.171416691Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bb3bcbaabeeeadbf6b43ae7d1d07e504b3c8a94ec024df89bcb237eba4f5e9b3",
	        "ResolvConfPath": "/var/lib/docker/containers/64286a6db1733b30e58ed37618c122199b2932f3f711bbedbf043d4175d87f6d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/64286a6db1733b30e58ed37618c122199b2932f3f711bbedbf043d4175d87f6d/hostname",
	        "HostsPath": "/var/lib/docker/containers/64286a6db1733b30e58ed37618c122199b2932f3f711bbedbf043d4175d87f6d/hosts",
	        "LogPath": "/var/lib/docker/containers/64286a6db1733b30e58ed37618c122199b2932f3f711bbedbf043d4175d87f6d/64286a6db1733b30e58ed37618c122199b2932f3f711bbedbf043d4175d87f6d-json.log",
	        "Name": "/functional-141069",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-141069:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-141069",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2af91857340957ad757a3bd78f5614c40dcf3034acf08d9339486ce56c1aaab0-init/diff:/var/lib/docker/overlay2/71eee05749e16aef5497ee0d3682f846917f1ee6949d544cdec1fff2723452d5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2af91857340957ad757a3bd78f5614c40dcf3034acf08d9339486ce56c1aaab0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2af91857340957ad757a3bd78f5614c40dcf3034acf08d9339486ce56c1aaab0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2af91857340957ad757a3bd78f5614c40dcf3034acf08d9339486ce56c1aaab0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-141069",
	                "Source": "/var/lib/docker/volumes/functional-141069/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-141069",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-141069",
	                "name.minikube.sigs.k8s.io": "functional-141069",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6ec2f7b9e0c5eb4a90fae3dc01f14f82c8a6578c2202086bb6332dca95c8bbf3",
	            "SandboxKey": "/var/run/docker/netns/6ec2f7b9e0c5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33528"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33529"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33532"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33530"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33531"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-141069": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c8bf9aa4d95cc71ee82325159735e24055a6e85e8a29a25a723047410b480f15",
	                    "EndpointID": "8852d0b76c2b82b7e76dae5279b6fe08d4fde343ebedf4f9f4eace86902bc0a1",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-141069",
	                        "64286a6db173"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
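
The inspect output confirms the node container itself is healthy: State.Status is "running" and all five ports (22, 2376, 5000, 8441, 32443) are published on 127.0.0.1. The same fields the post-mortem scrapes from `docker inspect functional-141069` can be read programmatically; a small sketch using the Docker Go SDK, assuming github.com/docker/docker/client is available on the module path:

	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
	)

	func main() {
		// Connect using the standard DOCKER_HOST environment, negotiating the API version.
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		defer cli.Close()

		// Same data as `docker inspect functional-141069`.
		info, err := cli.ContainerInspect(context.Background(), "functional-141069")
		if err != nil {
			panic(err)
		}
		fmt.Println("status:", info.State.Status)
		for port, bindings := range info.NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
			}
		}
	}
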
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-141069 -n functional-141069
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-141069 logs -n 25: (1.407390909s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                    Args                                    |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-141069 ssh findmnt                                              | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:04 UTC | 19 Sep 24 19:04 UTC |
	|                | -T /mount1                                                                 |                   |         |         |                     |                     |
	| ssh            | functional-141069 ssh findmnt                                              | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:04 UTC | 19 Sep 24 19:04 UTC |
	|                | -T /mount2                                                                 |                   |         |         |                     |                     |
	| ssh            | functional-141069 ssh findmnt                                              | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:04 UTC | 19 Sep 24 19:04 UTC |
	|                | -T /mount3                                                                 |                   |         |         |                     |                     |
	| mount          | -p functional-141069                                                       | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:04 UTC |                     |
	|                | --kill=true                                                                |                   |         |         |                     |                     |
	| ssh            | functional-141069 ssh sudo cat                                             | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:04 UTC | 19 Sep 24 19:04 UTC |
	|                | /etc/test/nested/copy/760079/hosts                                         |                   |         |         |                     |                     |
	| ssh            | functional-141069 ssh sudo                                                 | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC |                     |
	|                | systemctl is-active docker                                                 |                   |         |         |                     |                     |
	| ssh            | functional-141069 ssh sudo                                                 | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC |                     |
	|                | systemctl is-active containerd                                             |                   |         |         |                     |                     |
	| image          | functional-141069 image load --daemon                                      | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC | 19 Sep 24 19:05 UTC |
	|                | kicbase/echo-server:functional-141069                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-141069 image ls                                                 | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC | 19 Sep 24 19:05 UTC |
	| image          | functional-141069 image load --daemon                                      | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC | 19 Sep 24 19:05 UTC |
	|                | kicbase/echo-server:functional-141069                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-141069 image ls                                                 | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC | 19 Sep 24 19:05 UTC |
	| image          | functional-141069 image save kicbase/echo-server:functional-141069         | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC | 19 Sep 24 19:05 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-141069 image rm                                                 | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC | 19 Sep 24 19:05 UTC |
	|                | kicbase/echo-server:functional-141069                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-141069 image ls                                                 | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC | 19 Sep 24 19:05 UTC |
	| image          | functional-141069 image load                                               | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC | 19 Sep 24 19:05 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-141069                                                          | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC | 19 Sep 24 19:05 UTC |
	|                | image ls --format short                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-141069                                                          | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC | 19 Sep 24 19:05 UTC |
	|                | image ls --format yaml                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| ssh            | functional-141069 ssh pgrep                                                | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC |                     |
	|                | buildkitd                                                                  |                   |         |         |                     |                     |
	| image          | functional-141069 image build -t                                           | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC | 19 Sep 24 19:05 UTC |
	|                | localhost/my-image:functional-141069                                       |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                           |                   |         |         |                     |                     |
	| image          | functional-141069 image ls                                                 | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC | 19 Sep 24 19:05 UTC |
	| image          | functional-141069                                                          | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC | 19 Sep 24 19:05 UTC |
	|                | image ls --format json                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-141069                                                          | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC | 19 Sep 24 19:05 UTC |
	|                | image ls --format table                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| update-context | functional-141069                                                          | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC | 19 Sep 24 19:05 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| update-context | functional-141069                                                          | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC | 19 Sep 24 19:05 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| update-context | functional-141069                                                          | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC | 19 Sep 24 19:05 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 19:04:23
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 19:04:23.115980  802119 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:04:23.116103  802119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:04:23.116113  802119 out.go:358] Setting ErrFile to fd 2...
	I0919 19:04:23.116118  802119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:04:23.116323  802119 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-753213/.minikube/bin
	I0919 19:04:23.116950  802119 out.go:352] Setting JSON to false
	I0919 19:04:23.118048  802119 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":13613,"bootTime":1726759050,"procs":256,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 19:04:23.118147  802119 start.go:139] virtualization: kvm guest
	I0919 19:04:23.120402  802119 out.go:177] * [functional-141069] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0919 19:04:23.121709  802119 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 19:04:23.121727  802119 notify.go:220] Checking for updates...
	I0919 19:04:23.124331  802119 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 19:04:23.125653  802119 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-753213/kubeconfig
	I0919 19:04:23.127435  802119 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-753213/.minikube
	I0919 19:04:23.128665  802119 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 19:04:23.129712  802119 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 19:04:23.131291  802119 config.go:182] Loaded profile config "functional-141069": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:04:23.131883  802119 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 19:04:23.156520  802119 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0919 19:04:23.156616  802119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 19:04:23.211022  802119 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-19 19:04:23.200070259 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 19:04:23.211170  802119 docker.go:318] overlay module found
	I0919 19:04:23.213160  802119 out.go:177] * Using the docker driver based on existing profile
	I0919 19:04:23.214393  802119 start.go:297] selected driver: docker
	I0919 19:04:23.214412  802119 start.go:901] validating driver "docker" against &{Name:functional-141069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-141069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:04:23.214534  802119 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 19:04:23.214669  802119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 19:04:23.266739  802119 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-19 19:04:23.257269643 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 19:04:23.267495  802119 cni.go:84] Creating CNI manager for ""
	I0919 19:04:23.267551  802119 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 19:04:23.267611  802119 start.go:340] cluster config:
	{Name:functional-141069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-141069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:04:23.269322  802119 out.go:177] * dry-run validation complete!
	
	
	==> CRI-O <==
	Sep 19 19:05:26 functional-141069 crio[5591]: time="2024-09-19 19:05:26.098321244Z" level=info msg="Image localhost/kicbase/echo-server:functional-141069 not found" id=5484b680-09e8-47e2-b1a1-36882e27f1f9 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:05:26 functional-141069 crio[5591]: time="2024-09-19 19:05:26.525315538Z" level=info msg="Checking image status: kicbase/echo-server:functional-141069" id=1811974b-69e8-4df3-85e4-d0c3b5629ee2 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:05:26 functional-141069 crio[5591]: time="2024-09-19 19:05:26.557765062Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-141069" id=451e58c8-2fb2-47ae-b0fa-e24f99f54d02 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:05:26 functional-141069 crio[5591]: time="2024-09-19 19:05:26.557975678Z" level=info msg="Image docker.io/kicbase/echo-server:functional-141069 not found" id=451e58c8-2fb2-47ae-b0fa-e24f99f54d02 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:05:26 functional-141069 crio[5591]: time="2024-09-19 19:05:26.589524356Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-141069" id=845dab80-7eb1-4ffc-89c5-44628bb269f1 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:05:26 functional-141069 crio[5591]: time="2024-09-19 19:05:26.589741892Z" level=info msg="Image localhost/kicbase/echo-server:functional-141069 not found" id=845dab80-7eb1-4ffc-89c5-44628bb269f1 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:05:27 functional-141069 crio[5591]: time="2024-09-19 19:05:27.777053899Z" level=info msg="Checking image status: kicbase/echo-server:functional-141069" id=3c189a6c-3bd2-400a-ab68-66214759f4bd name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:05:27 functional-141069 crio[5591]: time="2024-09-19 19:05:27.809623278Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-141069" id=8ab46312-1bce-410c-a031-edc00184b9b8 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:05:27 functional-141069 crio[5591]: time="2024-09-19 19:05:27.809906185Z" level=info msg="Image docker.io/kicbase/echo-server:functional-141069 not found" id=8ab46312-1bce-410c-a031-edc00184b9b8 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:05:27 functional-141069 crio[5591]: time="2024-09-19 19:05:27.840926598Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-141069" id=a602bd04-3276-476f-8b0f-e087b09259cc name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:05:27 functional-141069 crio[5591]: time="2024-09-19 19:05:27.841115767Z" level=info msg="Image localhost/kicbase/echo-server:functional-141069 not found" id=a602bd04-3276-476f-8b0f-e087b09259cc name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:05:30 functional-141069 crio[5591]: time="2024-09-19 19:05:30.252464602Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=12f16ed8-f4e9-4d81-94f5-1aa25893bc07 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:05:30 functional-141069 crio[5591]: time="2024-09-19 19:05:30.252749221Z" level=info msg="Image docker.io/nginx:alpine not found" id=12f16ed8-f4e9-4d81-94f5-1aa25893bc07 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:05:41 functional-141069 crio[5591]: time="2024-09-19 19:05:41.252818216Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=3dd98553-7194-4ea2-92b4-1eea839db340 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:05:41 functional-141069 crio[5591]: time="2024-09-19 19:05:41.253054013Z" level=info msg="Image docker.io/nginx:alpine not found" id=3dd98553-7194-4ea2-92b4-1eea839db340 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:05:52 functional-141069 crio[5591]: time="2024-09-19 19:05:52.239586995Z" level=info msg="Pulling image: docker.io/nginx:latest" id=62a5b72f-3d06-4f48-a0f7-67b012323717 name=/runtime.v1.ImageService/PullImage
	Sep 19 19:05:52 functional-141069 crio[5591]: time="2024-09-19 19:05:52.256052220Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Sep 19 19:05:52 functional-141069 crio[5591]: time="2024-09-19 19:05:52.758848974Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=e344f9c7-d1b5-44e3-ac23-22faca3d826d name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:05:52 functional-141069 crio[5591]: time="2024-09-19 19:05:52.759112391Z" level=info msg="Image docker.io/mysql:5.7 not found" id=e344f9c7-d1b5-44e3-ac23-22faca3d826d name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:06:05 functional-141069 crio[5591]: time="2024-09-19 19:06:05.252277383Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=4d02802a-f392-464a-aced-d33089df12bd name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:06:05 functional-141069 crio[5591]: time="2024-09-19 19:06:05.252469128Z" level=info msg="Image docker.io/mysql:5.7 not found" id=4d02802a-f392-464a-aced-d33089df12bd name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:06:22 functional-141069 crio[5591]: time="2024-09-19 19:06:22.934173008Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=21b4e681-a367-4638-a5da-db039ef509d1 name=/runtime.v1.ImageService/PullImage
	Sep 19 19:06:22 functional-141069 crio[5591]: time="2024-09-19 19:06:22.938567933Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Sep 19 19:06:37 functional-141069 crio[5591]: time="2024-09-19 19:06:37.252920618Z" level=info msg="Checking image status: docker.io/nginx:latest" id=c9d0979f-dc14-4fa2-8b2d-9620293ab19e name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:06:37 functional-141069 crio[5591]: time="2024-09-19 19:06:37.253177106Z" level=info msg="Image docker.io/nginx:latest not found" id=c9d0979f-dc14-4fa2-8b2d-9620293ab19e name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	a9db194c74170       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         About a minute ago   Running             kubernetes-dashboard        0                   8c2a98ff69832       kubernetes-dashboard-695b96c756-h57zq
	05899174856a7       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   About a minute ago   Running             dashboard-metrics-scraper   0                   47fbe5fb31268       dashboard-metrics-scraper-c5db448b4-mhdfb
	2474c26a79e66       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              2 minutes ago        Exited              mount-munger                0                   c24b2461df330       busybox-mount
	3be6c9386c0a1       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               2 minutes ago        Running             echoserver                  0                   7c8b74aebd312       hello-node-6b9f76b5c7-grt5w
	5916651c7e043       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               2 minutes ago        Running             echoserver                  0                   53d2b97e793bb       hello-node-connect-67bdd5bbb4-llvnt
	46eae5a7c6154       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 3 minutes ago        Running             coredns                     3                   d0db000191c9d       coredns-7c65d6cfc9-jsgn7
	d26b6e8f948dd       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                 3 minutes ago        Running             kindnet-cni                 3                   234446f0c3aea       kindnet-6vwt2
	eccfad4b0b156       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                 3 minutes ago        Running             kube-proxy                  3                   2ee6f4f424c1b       kube-proxy-s7zj9
	23dbcb40611a7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 3 minutes ago        Running             storage-provisioner         4                   fae1b0fada010       storage-provisioner
	34ec523c69e36       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                 3 minutes ago        Running             kube-apiserver              0                   da4489f4bb536       kube-apiserver-functional-141069
	b6465a618542d       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                 3 minutes ago        Running             kube-scheduler              3                   e2bb0608b6ebd       kube-scheduler-functional-141069
	f1ad386107b94       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                 3 minutes ago        Running             kube-controller-manager     3                   c6d19e4d3421c       kube-controller-manager-functional-141069
	c187be7729ad2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 3 minutes ago        Running             etcd                        3                   a25d700cd7456       etcd-functional-141069
	a6e87ed34ac16       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 4 minutes ago        Exited              storage-provisioner         3                   fae1b0fada010       storage-provisioner
	f93558a9ee62c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 4 minutes ago        Exited              coredns                     2                   d0db000191c9d       coredns-7c65d6cfc9-jsgn7
	849c4710abbe5       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 4 minutes ago        Exited              etcd                        2                   a25d700cd7456       etcd-functional-141069
	006f94edf102d       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                 4 minutes ago        Exited              kube-scheduler              2                   e2bb0608b6ebd       kube-scheduler-functional-141069
	ffb5bc44aeb84       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                 4 minutes ago        Exited              kube-controller-manager     2                   c6d19e4d3421c       kube-controller-manager-functional-141069
	d4c1e18d5b680       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                 4 minutes ago        Exited              kube-proxy                  2                   2ee6f4f424c1b       kube-proxy-s7zj9
	6f4fb1b664ea6       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                 4 minutes ago        Exited              kindnet-cni                 2                   234446f0c3aea       kindnet-6vwt2
	
	
	==> coredns [46eae5a7c6154615c0a652f87ce896bce4536c077459a12b87cc5dfc6be0d30f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:40056 - 4851 "HINFO IN 4911277686334584114.634317722569930143. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.019995881s
	
	
	==> coredns [f93558a9ee62c4da6dbc743766c097e163203f09f08b4e7cac573c2e727f8d2d] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:47190 - 7219 "HINFO IN 6182279019942428577.1828839238233307805. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017029646s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
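
Note: the RBAC "forbidden" errors above line up with the control-plane restart in this run; CoreDNS tries to list namespaces, services, and endpointslices before the default clusterroles (system:coredns, system:discovery, system:basic-user, ...) are available again, then recovers once the API server is back. A quick spot-check once the cluster settles, assuming kubectl is pointed at the functional-141069 context (a sketch, not part of the test run):

	kubectl --context functional-141069 get clusterrole system:coredns system:discovery system:basic-user
	kubectl --context functional-141069 auth can-i list endpointslices.discovery.k8s.io --as=system:serviceaccount:kube-system:coredns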
	
	
	==> describe nodes <==
	Name:               functional-141069
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-141069
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=functional-141069
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_19T19_00_56_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 19:00:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-141069
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 19:06:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 19:05:45 +0000   Thu, 19 Sep 2024 19:00:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 19:05:45 +0000   Thu, 19 Sep 2024 19:00:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 19:05:45 +0000   Thu, 19 Sep 2024 19:00:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 19:05:45 +0000   Thu, 19 Sep 2024 19:01:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-141069
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 95f6fe0044074469a9e63885e0bd760d
	  System UUID:                f91a398c-4f58-4006-a0a5-af2d4cd2e27b
	  Boot ID:                    e13586fb-8251-4108-a9ef-ca5be7772d16
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6b9f76b5c7-grt5w                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  default                     hello-node-connect-67bdd5bbb4-llvnt          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  default                     mysql-6cdb49bbb-fwxgw                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     116s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 coredns-7c65d6cfc9-jsgn7                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m45s
	  kube-system                 etcd-functional-141069                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m50s
	  kube-system                 kindnet-6vwt2                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m45s
	  kube-system                 kube-apiserver-functional-141069             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 kube-controller-manager-functional-141069    200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 kube-proxy-s7zj9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m45s
	  kube-system                 kube-scheduler-functional-141069             100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m44s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-mhdfb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-h57zq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m43s                  kube-proxy       
	  Normal   Starting                 3m33s                  kube-proxy       
	  Normal   Starting                 4m11s                  kube-proxy       
	  Normal   Starting                 5m56s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m56s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  5m56s (x8 over 5m56s)  kubelet          Node functional-141069 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m56s (x8 over 5m56s)  kubelet          Node functional-141069 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m56s (x7 over 5m56s)  kubelet          Node functional-141069 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m51s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m51s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  5m50s                  kubelet          Node functional-141069 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m50s                  kubelet          Node functional-141069 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m50s                  kubelet          Node functional-141069 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m46s                  node-controller  Node functional-141069 event: Registered Node functional-141069 in Controller
	  Normal   NodeReady                5m4s                   kubelet          Node functional-141069 status is now: NodeReady
	  Warning  ContainerGCFailed        4m51s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m14s                  node-controller  Node functional-141069 event: Registered Node functional-141069 in Controller
	  Normal   Starting                 3m38s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m38s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  3m38s (x8 over 3m38s)  kubelet          Node functional-141069 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m38s (x8 over 3m38s)  kubelet          Node functional-141069 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m38s (x7 over 3m38s)  kubelet          Node functional-141069 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m32s                  node-controller  Node functional-141069 event: Registered Node functional-141069 in Controller
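
Note: the ContainerGCFailed warning (missing /var/run/crio/crio.sock) together with the three separate "Starting kubelet" events is consistent with the functional test restarting the runtime and kubelet, not with a crash. If the socket error persisted, a reasonable follow-up from the node would be (a sketch, assuming the minikube profile name used in this report):

	minikube -p functional-141069 ssh -- sudo systemctl status crio
	minikube -p functional-141069 ssh -- sudo crictl info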
	
	
	==> dmesg <==
	[  +1.030260] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000006] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.003972] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000005] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +2.011865] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000005] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.003970] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000004] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +4.219718] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000009] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000005] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[Sep19 18:17] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000009] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.000035] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000006] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[Sep19 19:04] FS-Cache: Duplicate cookie detected
	[  +0.004814] FS-Cache: O-cookie c=00000036 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006740] FS-Cache: O-cookie d=000000003d32a7f7{9P.session} n=000000008d2c1d93
	[  +0.007518] FS-Cache: O-key=[10] '34323938333031383536'
	[  +0.005349] FS-Cache: N-cookie c=00000037 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006665] FS-Cache: N-cookie d=000000003d32a7f7{9P.session} n=000000005d7d76c3
	[  +0.008916] FS-Cache: N-key=[10] '34323938333031383536'
	
	
	==> etcd [849c4710abbe5afd380b7564b1d93d183fef259cdbbb6909ced396be250d7803] <==
	{"level":"info","ts":"2024-09-19T19:02:28.246855Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-19T19:02:28.246891Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-19T19:02:28.246909Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-09-19T19:02:28.246917Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-19T19:02:28.246943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-09-19T19:02:28.246954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-19T19:02:28.249820Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-19T19:02:28.250072Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-19T19:02:28.250126Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-19T19:02:28.249837Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-19T19:02:28.249832Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-141069 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-19T19:02:28.251261Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T19:02:28.251346Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T19:02:28.252736Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-19T19:02:28.252731Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-19T19:02:52.145216Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-19T19:02:52.145292Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-141069","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-09-19T19:02:52.145397Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-19T19:02:52.145501Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-19T19:02:52.165060Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-19T19:02:52.165111Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-19T19:02:52.165171Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-09-19T19:02:52.167418Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-19T19:02:52.167553Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-19T19:02:52.167573Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-141069","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [c187be7729ad28b438ef6ecb381c8d161a3d30f2aa92fc2c41b62adc7654960e] <==
	{"level":"info","ts":"2024-09-19T19:03:09.053015Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-09-19T19:03:09.053143Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-19T19:03:09.053077Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T19:03:09.053182Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-19T19:03:09.055882Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-19T19:03:09.056045Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-19T19:03:09.056143Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-19T19:03:09.056174Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-19T19:03:09.056246Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-19T19:03:10.744548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-19T19:03:10.744599Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-19T19:03:10.744643Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-19T19:03:10.744661Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2024-09-19T19:03:10.744667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-09-19T19:03:10.744677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2024-09-19T19:03:10.744694Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-09-19T19:03:10.745879Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-19T19:03:10.745896Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-19T19:03:10.745879Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-141069 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-19T19:03:10.746124Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-19T19:03:10.746176Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-19T19:03:10.746820Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T19:03:10.746935Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T19:03:10.747644Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-19T19:03:10.747805Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:06:46 up  3:49,  0 users,  load average: 0.27, 0.51, 0.48
	Linux functional-141069 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [6f4fb1b664ea6519bcf463627f08bacfda441cdc5b4038a27fa434c3a065b4f3] <==
	W0919 19:02:17.845921       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0919 19:02:17.845967       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0919 19:02:19.383351       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0919 19:02:19.383401       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0919 19:02:20.179948       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0919 19:02:20.179991       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0919 19:02:20.322818       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0919 19:02:20.322872       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0919 19:02:20.990136       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0919 19:02:20.990183       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0919 19:02:23.805849       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0919 19:02:23.805896       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0919 19:02:24.259663       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0919 19:02:24.259698       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0919 19:02:25.146316       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0919 19:02:25.146368       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0919 19:02:25.485800       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0919 19:02:25.485853       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	I0919 19:02:35.857433       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0919 19:02:35.857469       1 metrics.go:61] Registering metrics
	I0919 19:02:35.857530       1 controller.go:374] Syncing nftables rules
	I0919 19:02:36.456640       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 19:02:36.456679       1 main.go:299] handling current node
	I0919 19:02:46.457595       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 19:02:46.457662       1 main.go:299] handling current node
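
Note: the connection-refused burst against 10.96.0.1:443 brackets the API server restart; kindnet resumes syncing caches at 19:02:35 once the endpoint returns. A direct readiness probe of the same API server, assuming kubectl access to this cluster (sketch):

	kubectl --context functional-141069 get --raw='/readyz?verbose'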
	
	
	==> kindnet [d26b6e8f948dd62d1c4b63c852c1fce8421d19aea8c6dd2d9322059bdbe43ad8] <==
	I0919 19:04:43.271425       1 main.go:299] handling current node
	I0919 19:04:53.265668       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 19:04:53.265746       1 main.go:299] handling current node
	I0919 19:05:03.267425       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 19:05:03.267460       1 main.go:299] handling current node
	I0919 19:05:13.264920       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 19:05:13.264964       1 main.go:299] handling current node
	I0919 19:05:23.264276       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 19:05:23.264309       1 main.go:299] handling current node
	I0919 19:05:33.264557       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 19:05:33.264620       1 main.go:299] handling current node
	I0919 19:05:43.264499       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 19:05:43.264572       1 main.go:299] handling current node
	I0919 19:05:53.265231       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 19:05:53.265269       1 main.go:299] handling current node
	I0919 19:06:03.271379       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 19:06:03.271415       1 main.go:299] handling current node
	I0919 19:06:13.264779       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 19:06:13.264814       1 main.go:299] handling current node
	I0919 19:06:23.268273       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 19:06:23.268311       1 main.go:299] handling current node
	I0919 19:06:33.271376       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 19:06:33.271409       1 main.go:299] handling current node
	I0919 19:06:43.267387       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 19:06:43.267447       1 main.go:299] handling current node
	
	
	==> kube-apiserver [34ec523c69e36d6dcdba816af0b779ec75bb98820fd4c1f8df436ef44a706f9d] <==
	I0919 19:03:11.836251       1 aggregator.go:171] initial CRD sync complete...
	I0919 19:03:11.836307       1 autoregister_controller.go:144] Starting autoregister controller
	I0919 19:03:11.836340       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0919 19:03:11.836372       1 cache.go:39] Caches are synced for autoregister controller
	I0919 19:03:11.836115       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0919 19:03:11.838969       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0919 19:03:11.841481       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0919 19:03:11.842699       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0919 19:03:12.678031       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0919 19:03:13.695219       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0919 19:03:13.792397       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0919 19:03:13.803686       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0919 19:03:13.858621       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0919 19:03:13.864313       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0919 19:03:30.139874       1 controller.go:615] quota admission added evaluator for: endpoints
	I0919 19:03:33.076946       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.97.203.230"}
	I0919 19:03:33.085217       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 19:03:38.631182       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.36.227"}
	I0919 19:03:39.281379       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0919 19:03:39.368721       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.254.162"}
	I0919 19:03:39.511422       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.235.200"}
	I0919 19:04:24.243692       1 controller.go:615] quota admission added evaluator for: namespaces
	I0919 19:04:24.546595       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.165.61"}
	I0919 19:04:24.562577       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.101.114"}
	I0919 19:04:50.758816       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.107.102.176"}
	
	
	==> kube-controller-manager [f1ad386107b940be6b13e6128491e816dc896755be7b8d6fb397f22a83c02f70] <==
	E0919 19:04:24.339911       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0919 19:04:24.344368       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="3.312972ms"
	E0919 19:04:24.344402       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0919 19:04:24.344620       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="3.598164ms"
	E0919 19:04:24.344648       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0919 19:04:24.444119       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="93.023195ms"
	I0919 19:04:24.450563       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="89.386246ms"
	I0919 19:04:24.458868       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="8.144401ms"
	I0919 19:04:24.459075       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="39.017µs"
	I0919 19:04:24.459991       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="15.810494ms"
	I0919 19:04:24.460077       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="46.593µs"
	I0919 19:04:24.469193       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="72.128µs"
	I0919 19:04:43.914023       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-141069"
	I0919 19:04:50.801180       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="9.638302ms"
	I0919 19:04:50.806343       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="5.098734ms"
	I0919 19:04:50.806409       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="37.98µs"
	I0919 19:04:50.808182       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="46.004µs"
	I0919 19:05:14.652366       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-141069"
	I0919 19:05:17.715403       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="7.950797ms"
	I0919 19:05:17.715489       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="48.377µs"
	I0919 19:05:21.710007       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="5.78228ms"
	I0919 19:05:21.710107       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="53.558µs"
	I0919 19:05:45.132047       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-141069"
	I0919 19:05:52.768197       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="71.817µs"
	I0919 19:06:05.260824       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="89.677µs"
	
	
	==> kube-controller-manager [ffb5bc44aeb8403cce7208898412b27c5d35140d368c10269958b1c41225ae70] <==
	I0919 19:02:32.415263       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0919 19:02:32.415296       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0919 19:02:32.415321       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0919 19:02:32.415337       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0919 19:02:32.415411       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-141069"
	I0919 19:02:32.470665       1 shared_informer.go:320] Caches are synced for ephemeral
	I0919 19:02:32.509648       1 shared_informer.go:320] Caches are synced for PV protection
	I0919 19:02:32.509667       1 shared_informer.go:320] Caches are synced for expand
	I0919 19:02:32.515473       1 shared_informer.go:320] Caches are synced for resource quota
	I0919 19:02:32.516387       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="107.232193ms"
	I0919 19:02:32.516554       1 shared_informer.go:320] Caches are synced for attach detach
	I0919 19:02:32.516591       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="105.79µs"
	I0919 19:02:32.558130       1 shared_informer.go:320] Caches are synced for persistent volume
	I0919 19:02:32.558142       1 shared_informer.go:320] Caches are synced for stateful set
	I0919 19:02:32.558159       1 shared_informer.go:320] Caches are synced for PVC protection
	I0919 19:02:32.573562       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0919 19:02:32.587661       1 shared_informer.go:320] Caches are synced for endpoint
	I0919 19:02:32.602159       1 shared_informer.go:320] Caches are synced for resource quota
	I0919 19:02:32.608588       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0919 19:02:32.658132       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0919 19:02:33.026666       1 shared_informer.go:320] Caches are synced for garbage collector
	I0919 19:02:33.058504       1 shared_informer.go:320] Caches are synced for garbage collector
	I0919 19:02:33.058556       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0919 19:02:33.334378       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.763778ms"
	I0919 19:02:33.334490       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="77.621µs"
	
	
	==> kube-proxy [d4c1e18d5b680fa01f6333215fdcca5347b8af219200b2f7164895791f0a4b74] <==
	I0919 19:02:18.037468       1 server_linux.go:66] "Using iptables proxy"
	E0919 19:02:18.153525       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-141069\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0919 19:02:19.264315       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-141069\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0919 19:02:21.459833       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-141069\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0919 19:02:26.194081       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-141069\": dial tcp 192.168.49.2:8441: connect: connection refused"
	I0919 19:02:34.244162       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0919 19:02:34.244226       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 19:02:34.263792       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 19:02:34.263872       1 server_linux.go:169] "Using iptables Proxier"
	I0919 19:02:34.266023       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 19:02:34.266354       1 server.go:483] "Version info" version="v1.31.1"
	I0919 19:02:34.266580       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 19:02:34.268931       1 config.go:105] "Starting endpoint slice config controller"
	I0919 19:02:34.269002       1 config.go:328] "Starting node config controller"
	I0919 19:02:34.269010       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0919 19:02:34.268976       1 config.go:199] "Starting service config controller"
	I0919 19:02:34.269141       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0919 19:02:34.269070       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0919 19:02:34.369254       1 shared_informer.go:320] Caches are synced for node config
	I0919 19:02:34.369278       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0919 19:02:34.369322       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [eccfad4b0b156e1fb323a54215b597cbc5427631771a874d360f125cf051f49a] <==
	I0919 19:03:12.774668       1 server_linux.go:66] "Using iptables proxy"
	I0919 19:03:12.970949       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0919 19:03:12.971024       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 19:03:13.053154       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 19:03:13.053213       1 server_linux.go:169] "Using iptables Proxier"
	I0919 19:03:13.055059       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 19:03:13.055452       1 server.go:483] "Version info" version="v1.31.1"
	I0919 19:03:13.055493       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 19:03:13.056538       1 config.go:105] "Starting endpoint slice config controller"
	I0919 19:03:13.056583       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0919 19:03:13.056601       1 config.go:328] "Starting node config controller"
	I0919 19:03:13.056612       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0919 19:03:13.056631       1 config.go:199] "Starting service config controller"
	I0919 19:03:13.056641       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0919 19:03:13.157139       1 shared_informer.go:320] Caches are synced for service config
	I0919 19:03:13.157169       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0919 19:03:13.157220       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [006f94edf102da345d1fcff9937a4c2cba46aa40e9c589e33e9b3268ba828754] <==
	E0919 19:02:27.439854       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0919 19:02:27.502033       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0919 19:02:27.502091       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0919 19:02:27.539964       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0919 19:02:27.540010       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0919 19:02:27.553825       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0919 19:02:27.553870       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0919 19:02:27.649979       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0919 19:02:27.650031       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0919 19:02:27.683072       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0919 19:02:27.683113       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0919 19:02:27.707917       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0919 19:02:27.707952       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0919 19:02:27.722709       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0919 19:02:27.722740       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0919 19:02:27.793060       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0919 19:02:27.793116       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0919 19:02:29.365360       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0919 19:02:29.365420       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0919 19:02:29.371384       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0919 19:02:29.371437       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 19:02:29.373860       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0919 19:02:29.373888       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0919 19:02:30.864375       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0919 19:02:52.146477       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b6465a618542d82e167dfa84727410efc0fe50bc1fd4bcf4db74b60d855b1409] <==
	I0919 19:03:09.480520       1 serving.go:386] Generated self-signed cert in-memory
	W0919 19:03:11.736045       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 19:03:11.736093       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 19:03:11.736107       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 19:03:11.736115       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 19:03:11.756289       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0919 19:03:11.756316       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 19:03:11.758162       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 19:03:11.758211       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0919 19:03:11.758433       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0919 19:03:11.758595       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 19:03:11.858948       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 19 19:05:30 functional-141069 kubelet[5956]: E0919 19:05:30.253017    5956 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="d1ce905b-515b-4f60-aff6-1eeb2b5075af"
	Sep 19 19:05:38 functional-141069 kubelet[5956]: E0919 19:05:38.394472    5956 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772738394254216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:05:38 functional-141069 kubelet[5956]: E0919 19:05:38.394517    5956 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772738394254216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:05:48 functional-141069 kubelet[5956]: E0919 19:05:48.396118    5956 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772748395902978,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:05:48 functional-141069 kubelet[5956]: E0919 19:05:48.396164    5956 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772748395902978,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:05:52 functional-141069 kubelet[5956]: E0919 19:05:52.239043    5956 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Sep 19 19:05:52 functional-141069 kubelet[5956]: E0919 19:05:52.239117    5956 kuberuntime_image.go:55] "Failed to pull image" err="loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Sep 19 19:05:52 functional-141069 kubelet[5956]: E0919 19:05:52.239387    5956 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:mysql,Image:docker.io/mysql:5.7,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:mysql,HostPort:0,ContainerPort:3306,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:MYSQL_ROOT_PASSWORD,Value:password,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{700 -3} {<nil>} 700m DecimalSI},memory: {{734003200 0} {<nil>} 700Mi BinarySI},},Requests:ResourceList{cpu: {{600 -3} {<nil>} 600m DecimalSI},memory: {{536870912 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hp9q5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mysql-6cdb49bbb-fwxgw_default(6f666c87-a6c4-4e7e-803a-1dc8af345566): ErrImagePull: loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 19 19:05:52 functional-141069 kubelet[5956]: E0919 19:05:52.240581    5956 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-6cdb49bbb-fwxgw" podUID="6f666c87-a6c4-4e7e-803a-1dc8af345566"
	Sep 19 19:05:52 functional-141069 kubelet[5956]: E0919 19:05:52.759385    5956 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\"\"" pod="default/mysql-6cdb49bbb-fwxgw" podUID="6f666c87-a6c4-4e7e-803a-1dc8af345566"
	Sep 19 19:05:58 functional-141069 kubelet[5956]: E0919 19:05:58.397696    5956 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772758397456056,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:05:58 functional-141069 kubelet[5956]: E0919 19:05:58.397735    5956 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772758397456056,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:06:08 functional-141069 kubelet[5956]: E0919 19:06:08.399157    5956 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772768398969041,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:06:08 functional-141069 kubelet[5956]: E0919 19:06:08.399204    5956 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772768398969041,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:06:18 functional-141069 kubelet[5956]: E0919 19:06:18.400647    5956 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772778400464430,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:06:18 functional-141069 kubelet[5956]: E0919 19:06:18.400697    5956 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772778400464430,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:06:22 functional-141069 kubelet[5956]: E0919 19:06:22.933658    5956 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 19 19:06:22 functional-141069 kubelet[5956]: E0919 19:06:22.933730    5956 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 19 19:06:22 functional-141069 kubelet[5956]: E0919 19:06:22.933952    5956 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:myfrontend,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-snblv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod sp-pod_default(a92b6612-8ecf-46c0-a205-78bd8970a8ff): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 19 19:06:22 functional-141069 kubelet[5956]: E0919 19:06:22.935361    5956 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="a92b6612-8ecf-46c0-a205-78bd8970a8ff"
	Sep 19 19:06:28 functional-141069 kubelet[5956]: E0919 19:06:28.402141    5956 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772788401976506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:06:28 functional-141069 kubelet[5956]: E0919 19:06:28.402184    5956 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772788401976506,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:06:37 functional-141069 kubelet[5956]: E0919 19:06:37.253463    5956 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="a92b6612-8ecf-46c0-a205-78bd8970a8ff"
	Sep 19 19:06:38 functional-141069 kubelet[5956]: E0919 19:06:38.403634    5956 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772798403448027,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:06:38 functional-141069 kubelet[5956]: E0919 19:06:38.403678    5956 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726772798403448027,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [a9db194c7417074f05eefd6d4880ffd3724012b383d3d9bbcbe68752ec751b14] <==
	2024/09/19 19:05:21 Starting overwatch
	2024/09/19 19:05:21 Using namespace: kubernetes-dashboard
	2024/09/19 19:05:21 Using in-cluster config to connect to apiserver
	2024/09/19 19:05:21 Using secret token for csrf signing
	2024/09/19 19:05:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/19 19:05:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/19 19:05:21 Successful initial request to the apiserver, version: v1.31.1
	2024/09/19 19:05:21 Generating JWE encryption key
	2024/09/19 19:05:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/19 19:05:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/19 19:05:21 Initializing JWE encryption key from synchronized object
	2024/09/19 19:05:21 Creating in-cluster Sidecar client
	2024/09/19 19:05:21 Successful request to sidecar
	2024/09/19 19:05:21 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [23dbcb40611a7af3ef6dd30657a99f22746cd91cefa5ae67952d1cabf5c9bd24] <==
	I0919 19:03:12.675768       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 19:03:12.743442       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 19:03:12.743514       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0919 19:03:30.143692       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 19:03:30.143808       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-141069_7be0cb94-a205-4b68-9c94-8c3ec73f28b8!
	I0919 19:03:30.143802       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9f500cfc-5004-4575-a4e8-04e5a0447dd7", APIVersion:"v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-141069_7be0cb94-a205-4b68-9c94-8c3ec73f28b8 became leader
	I0919 19:03:30.244234       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-141069_7be0cb94-a205-4b68-9c94-8c3ec73f28b8!
	I0919 19:03:44.436841       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0919 19:03:44.437055       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"78595a86-c89f-4050-9ad0-31fbf426503c", APIVersion:"v1", ResourceVersion:"784", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0919 19:03:44.436924       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    9532a489-7d52-48c9-a8d3-d2bf17ba5926 382 0 2024-09-19 19:01:01 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-09-19 19:01:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-78595a86-c89f-4050-9ad0-31fbf426503c &PersistentVolumeClaim{ObjectMeta:{myclaim  default  78595a86-c89f-4050-9ad0-31fbf426503c 784 0 2024-09-19 19:03:44 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-09-19 19:03:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-09-19 19:03:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0919 19:03:44.437394       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-78595a86-c89f-4050-9ad0-31fbf426503c" provisioned
	I0919 19:03:44.437419       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0919 19:03:44.437426       1 volume_store.go:212] Trying to save persistentvolume "pvc-78595a86-c89f-4050-9ad0-31fbf426503c"
	I0919 19:03:44.444067       1 volume_store.go:219] persistentvolume "pvc-78595a86-c89f-4050-9ad0-31fbf426503c" saved
	I0919 19:03:44.445864       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"78595a86-c89f-4050-9ad0-31fbf426503c", APIVersion:"v1", ResourceVersion:"784", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-78595a86-c89f-4050-9ad0-31fbf426503c
	
	
	==> storage-provisioner [a6e87ed34ac16dc8dc24d090d91c2687d421929e0259ce16f1250b512f05abbd] <==
	I0919 19:02:34.010385       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 19:02:34.017669       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 19:02:34.017714       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0919 19:02:51.414246       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 19:02:51.414310       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9f500cfc-5004-4575-a4e8-04e5a0447dd7", APIVersion:"v1", ResourceVersion:"600", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-141069_7dab80e0-0fd1-49d6-adcf-ddf3c49a0586 became leader
	I0919 19:02:51.414394       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-141069_7dab80e0-0fd1-49d6-adcf-ddf3c49a0586!
	I0919 19:02:51.515237       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-141069_7dab80e0-0fd1-49d6-adcf-ddf3c49a0586!
	

                                                
                                                
-- /stdout --
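
The log dump above localizes the failure: the storage-provisioner section shows "default/myclaim" provisioned and saved cleanly (pvc-78595a86-c89f-4050-9ad0-31fbf426503c), while the kubelet section shows every docker.io pull (nginx:alpine, nginx:latest, mysql:5.7) rejected with "toomanyrequests". For reference, an equivalent claim can be created by hand; this is a minimal sketch built from the spec echoed in the provisioner log (500Mi, ReadWriteOnce, default "standard" class), not the suite's own testdata:

    # Sketch only: field values copied from the provisioner log above.
    kubectl --context functional-141069 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
      namespace: default
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 500Mi
    EOF
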
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-141069 -n functional-141069
helpers_test.go:261: (dbg) Run:  kubectl --context functional-141069 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-6cdb49bbb-fwxgw nginx-svc sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-141069 describe pod busybox-mount mysql-6cdb49bbb-fwxgw nginx-svc sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-141069 describe pod busybox-mount mysql-6cdb49bbb-fwxgw nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-141069/192.168.49.2
	Start Time:       Thu, 19 Sep 2024 19:04:23 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://2474c26a79e6637990374cd6154b3b4203a4802ebc130ab9bbf8a3ab9eac5a93
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 19 Sep 2024 19:04:44 +0000
	      Finished:     Thu, 19 Sep 2024 19:04:44 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rfh8h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-rfh8h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  2m24s  default-scheduler  Successfully assigned default/busybox-mount to functional-141069
	  Normal  Pulling    2m24s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     2m3s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.001s (20.903s including waiting). Image size: 4631262 bytes.
	  Normal  Created    2m3s   kubelet            Created container mount-munger
	  Normal  Started    2m3s   kubelet            Started container mount-munger
	
	
	Name:             mysql-6cdb49bbb-fwxgw
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-141069/192.168.49.2
	Start Time:       Thu, 19 Sep 2024 19:04:50 +0000
	Labels:           app=mysql
	                  pod-template-hash=6cdb49bbb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-6cdb49bbb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hp9q5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hp9q5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  117s                default-scheduler  Successfully assigned default/mysql-6cdb49bbb-fwxgw to functional-141069
	  Warning  Failed     55s                 kubelet            Failed to pull image "docker.io/mysql:5.7": loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     55s                 kubelet            Error: ErrImagePull
	  Normal   BackOff    55s                 kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     55s                 kubelet            Error: ImagePullBackOff
	  Normal   Pulling    42s (x2 over 116s)  kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-141069/192.168.49.2
	Start Time:       Thu, 19 Sep 2024 19:03:38 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:  10.244.0.4
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gglk6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-gglk6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m9s                 default-scheduler  Successfully assigned default/nginx-svc to functional-141069
	  Warning  Failed     2m37s                kubelet            Failed to pull image "docker.io/nginx:alpine": determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     92s (x2 over 2m37s)  kubelet            Error: ErrImagePull
	  Warning  Failed     92s                  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    77s (x2 over 2m37s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     77s (x2 over 2m37s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    66s (x3 over 3m9s)   kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-141069/192.168.49.2
	Start Time:       Thu, 19 Sep 2024 19:03:44 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-snblv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-snblv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m3s                 default-scheduler  Successfully assigned default/sp-pod to functional-141069
	  Normal   Pulling    110s (x2 over 3m3s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     25s (x2 over 2m4s)   kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     25s (x2 over 2m4s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    10s (x2 over 2m4s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     10s (x2 over 2m4s)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
E0919 19:06:57.207998  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:07:24.913729  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt: no such file or directory" logger="UnhandledError"
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (188.95s)
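
Taken together, TestFunctional/parallel/PersistentVolumeClaim failed on an environmental limit (Docker Hub's anonymous pull quota), not on claim or provisioner behavior. One hedged mitigation is to seed the images into minikube before the tests run, so the kubelet never has to pull from docker.io; a sketch, assuming the host's registry credentials (if any) are already configured via `docker login`:

    # Sketch: pre-seed the images this run failed to pull.
    minikube -p functional-141069 cache add docker.io/nginx:alpine
    minikube -p functional-141069 cache add docker.io/nginx:latest
    minikube -p functional-141069 cache add docker.io/mysql:5.7
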

                                                
                                    
x
+
TestFunctional/parallel/MySQL (602.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-141069 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-fwxgw" [6f666c87-a6c4-4e7e-803a-1dc8af345566] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
2024/09/19 19:05:24 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
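The DEBUG line above is cross-talk from the parallel Dashboard test, which polls the dashboard through an apiserver service proxy; it is unrelated to the MySQL failure. The same URL shape is reachable through a plain kubectl proxy; a sketch, where 36195 is just the ephemeral port this run happened to use:

    kubectl --context functional-141069 proxy --port=36195 &
    curl http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
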
helpers_test.go:329: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1799: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1799: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-141069 -n functional-141069
functional_test.go:1799: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2024-09-19 19:14:51.085438078 +0000 UTC m=+2160.988397908
functional_test.go:1799: (dbg) Run:  kubectl --context functional-141069 describe po mysql-6cdb49bbb-fwxgw -n default
functional_test.go:1799: (dbg) kubectl --context functional-141069 describe po mysql-6cdb49bbb-fwxgw -n default:
Name:             mysql-6cdb49bbb-fwxgw
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-141069/192.168.49.2
Start Time:       Thu, 19 Sep 2024 19:04:50 +0000
Labels:           app=mysql
                  pod-template-hash=6cdb49bbb
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
  IP:           10.244.0.11
Controlled By:  ReplicaSet/mysql-6cdb49bbb
Containers:
  mysql:
    Container ID:   
    Image:          docker.io/mysql:5.7
    Image ID:       
    Port:           3306/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hp9q5 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-hp9q5:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-6cdb49bbb-fwxgw to functional-141069
  Warning  Failed     5m49s (x2 over 7m27s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   Pulling    4m57s (x4 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
  Warning  Failed     4m26s (x2 over 8m59s)  kubelet            Failed to pull image "docker.io/mysql:5.7": loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     4m26s (x4 over 8m59s)  kubelet            Error: ErrImagePull
  Normal   BackOff    3m58s (x7 over 8m59s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
  Warning  Failed     3m58s (x7 over 8m59s)  kubelet            Error: ImagePullBackOff
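
The event sequence above (four pull attempts, two distinct toomanyrequests responses, then steady ImagePullBackOff) is the signature of an exhausted anonymous Docker Hub quota. The remaining quota can be checked out of band; a sketch against Docker Hub's documented rate-limit check endpoint (assumes curl and jq on the host):

    # Fetch an anonymous token, then read the ratelimit-* headers from a HEAD request.
    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    curl -sI -H "Authorization: Bearer $TOKEN" "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i ratelimit
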
functional_test.go:1799: (dbg) Run:  kubectl --context functional-141069 logs mysql-6cdb49bbb-fwxgw -n default
functional_test.go:1799: (dbg) Non-zero exit: kubectl --context functional-141069 logs mysql-6cdb49bbb-fwxgw -n default: exit status 1 (68.623762ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-6cdb49bbb-fwxgw" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1799: kubectl --context functional-141069 logs mysql-6cdb49bbb-fwxgw -n default: exit status 1
functional_test.go:1801: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
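
If the runner has Docker Hub credentials, attaching an imagePullSecret to the default service account would move these pulls onto the authenticated quota; a sketch with placeholder credentials (REPLACE_ME is not a value from this run):

    kubectl --context functional-141069 create secret docker-registry regcred \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=REPLACE_ME --docker-password=REPLACE_ME
    kubectl --context functional-141069 patch serviceaccount default \
      -p '{"imagePullSecrets":[{"name":"regcred"}]}'
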
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-141069
helpers_test.go:235: (dbg) docker inspect functional-141069:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "64286a6db1733b30e58ed37618c122199b2932f3f711bbedbf043d4175d87f6d",
	        "Created": "2024-09-19T19:00:42.069492015Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 787785,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-19T19:00:42.171416691Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bb3bcbaabeeeadbf6b43ae7d1d07e504b3c8a94ec024df89bcb237eba4f5e9b3",
	        "ResolvConfPath": "/var/lib/docker/containers/64286a6db1733b30e58ed37618c122199b2932f3f711bbedbf043d4175d87f6d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/64286a6db1733b30e58ed37618c122199b2932f3f711bbedbf043d4175d87f6d/hostname",
	        "HostsPath": "/var/lib/docker/containers/64286a6db1733b30e58ed37618c122199b2932f3f711bbedbf043d4175d87f6d/hosts",
	        "LogPath": "/var/lib/docker/containers/64286a6db1733b30e58ed37618c122199b2932f3f711bbedbf043d4175d87f6d/64286a6db1733b30e58ed37618c122199b2932f3f711bbedbf043d4175d87f6d-json.log",
	        "Name": "/functional-141069",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-141069:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-141069",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2af91857340957ad757a3bd78f5614c40dcf3034acf08d9339486ce56c1aaab0-init/diff:/var/lib/docker/overlay2/71eee05749e16aef5497ee0d3682f846917f1ee6949d544cdec1fff2723452d5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2af91857340957ad757a3bd78f5614c40dcf3034acf08d9339486ce56c1aaab0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2af91857340957ad757a3bd78f5614c40dcf3034acf08d9339486ce56c1aaab0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2af91857340957ad757a3bd78f5614c40dcf3034acf08d9339486ce56c1aaab0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-141069",
	                "Source": "/var/lib/docker/volumes/functional-141069/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-141069",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-141069",
	                "name.minikube.sigs.k8s.io": "functional-141069",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6ec2f7b9e0c5eb4a90fae3dc01f14f82c8a6578c2202086bb6332dca95c8bbf3",
	            "SandboxKey": "/var/run/docker/netns/6ec2f7b9e0c5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33528"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33529"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33532"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33530"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33531"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-141069": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c8bf9aa4d95cc71ee82325159735e24055a6e85e8a29a25a723047410b480f15",
	                    "EndpointID": "8852d0b76c2b82b7e76dae5279b6fe08d4fde343ebedf4f9f4eace86902bc0a1",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-141069",
	                        "64286a6db173"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-141069 -n functional-141069
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-141069 logs -n 25: (1.408077769s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                    Args                                    |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-141069 ssh findmnt                                              | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:04 UTC | 19 Sep 24 19:04 UTC |
	|                | -T /mount1                                                                 |                   |         |         |                     |                     |
	| ssh            | functional-141069 ssh findmnt                                              | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:04 UTC | 19 Sep 24 19:04 UTC |
	|                | -T /mount2                                                                 |                   |         |         |                     |                     |
	| ssh            | functional-141069 ssh findmnt                                              | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:04 UTC | 19 Sep 24 19:04 UTC |
	|                | -T /mount3                                                                 |                   |         |         |                     |                     |
	| mount          | -p functional-141069                                                       | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:04 UTC |                     |
	|                | --kill=true                                                                |                   |         |         |                     |                     |
	| ssh            | functional-141069 ssh sudo cat                                             | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:04 UTC | 19 Sep 24 19:04 UTC |
	|                | /etc/test/nested/copy/760079/hosts                                         |                   |         |         |                     |                     |
	| ssh            | functional-141069 ssh sudo                                                 | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC |                     |
	|                | systemctl is-active docker                                                 |                   |         |         |                     |                     |
	| ssh            | functional-141069 ssh sudo                                                 | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC |                     |
	|                | systemctl is-active containerd                                             |                   |         |         |                     |                     |
	| image          | functional-141069 image load --daemon                                      | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC | 19 Sep 24 19:05 UTC |
	|                | kicbase/echo-server:functional-141069                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-141069 image ls                                                 | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC | 19 Sep 24 19:05 UTC |
	| image          | functional-141069 image load --daemon                                      | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC | 19 Sep 24 19:05 UTC |
	|                | kicbase/echo-server:functional-141069                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-141069 image ls                                                 | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC | 19 Sep 24 19:05 UTC |
	| image          | functional-141069 image save kicbase/echo-server:functional-141069         | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC | 19 Sep 24 19:05 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-141069 image rm                                                 | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC | 19 Sep 24 19:05 UTC |
	|                | kicbase/echo-server:functional-141069                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-141069 image ls                                                 | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC | 19 Sep 24 19:05 UTC |
	| image          | functional-141069 image load                                               | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC | 19 Sep 24 19:05 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-141069                                                          | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC | 19 Sep 24 19:05 UTC |
	|                | image ls --format short                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-141069                                                          | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC | 19 Sep 24 19:05 UTC |
	|                | image ls --format yaml                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| ssh            | functional-141069 ssh pgrep                                                | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC |                     |
	|                | buildkitd                                                                  |                   |         |         |                     |                     |
	| image          | functional-141069 image build -t                                           | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC | 19 Sep 24 19:05 UTC |
	|                | localhost/my-image:functional-141069                                       |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                           |                   |         |         |                     |                     |
	| image          | functional-141069 image ls                                                 | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC | 19 Sep 24 19:05 UTC |
	| image          | functional-141069                                                          | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC | 19 Sep 24 19:05 UTC |
	|                | image ls --format json                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-141069                                                          | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC | 19 Sep 24 19:05 UTC |
	|                | image ls --format table                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| update-context | functional-141069                                                          | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC | 19 Sep 24 19:05 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| update-context | functional-141069                                                          | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC | 19 Sep 24 19:05 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| update-context | functional-141069                                                          | functional-141069 | jenkins | v1.34.0 | 19 Sep 24 19:05 UTC | 19 Sep 24 19:05 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
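
The image rows above exercise a full save → rm → load round trip: the tagged image is exported to a tarball, removed from the image store, re-imported from the tarball, and `image ls` verifies the store after each step. A minimal Go sketch of driving the same cycle against this profile (the tarball path below is illustrative; the run used the Jenkins workspace path shown in the table, and the helper is not minikube test code):

	package main

	import (
		"log"
		"os/exec"
	)

	// run shells out to the minikube binary used throughout this report and
	// fails fast, mirroring one row of the command table.
	func run(args ...string) {
		cmd := exec.Command("out/minikube-linux-amd64", args...)
		cmd.Stdout, cmd.Stderr = log.Writer(), log.Writer()
		if err := cmd.Run(); err != nil {
			log.Fatalf("%v: %v", args, err)
		}
	}

	func main() {
		const profile = "functional-141069"
		const tarball = "/tmp/echo-server-save.tar" // assumption: any writable path works
		image := "kicbase/echo-server:" + profile
		run("-p", profile, "image", "save", image, tarball, "--alsologtostderr")
		run("-p", profile, "image", "rm", image, "--alsologtostderr")
		run("-p", profile, "image", "load", tarball, "--alsologtostderr")
		run("-p", profile, "image", "ls")
	}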
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 19:04:23
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 19:04:23.115980  802119 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:04:23.116103  802119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:04:23.116113  802119 out.go:358] Setting ErrFile to fd 2...
	I0919 19:04:23.116118  802119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:04:23.116323  802119 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-753213/.minikube/bin
	I0919 19:04:23.116950  802119 out.go:352] Setting JSON to false
	I0919 19:04:23.118048  802119 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":13613,"bootTime":1726759050,"procs":256,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 19:04:23.118147  802119 start.go:139] virtualization: kvm guest
	I0919 19:04:23.120402  802119 out.go:177] * [functional-141069] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0919 19:04:23.121709  802119 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 19:04:23.121727  802119 notify.go:220] Checking for updates...
	I0919 19:04:23.124331  802119 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 19:04:23.125653  802119 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-753213/kubeconfig
	I0919 19:04:23.127435  802119 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-753213/.minikube
	I0919 19:04:23.128665  802119 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 19:04:23.129712  802119 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 19:04:23.131291  802119 config.go:182] Loaded profile config "functional-141069": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:04:23.131883  802119 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 19:04:23.156520  802119 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0919 19:04:23.156616  802119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 19:04:23.211022  802119 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-19 19:04:23.200070259 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 19:04:23.211170  802119 docker.go:318] overlay module found
	I0919 19:04:23.213160  802119 out.go:177] * Using the docker driver based on existing profile
	I0919 19:04:23.214393  802119 start.go:297] selected driver: docker
	I0919 19:04:23.214412  802119 start.go:901] validating driver "docker" against &{Name:functional-141069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-141069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:04:23.214534  802119 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 19:04:23.214669  802119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 19:04:23.266739  802119 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-19 19:04:23.257269643 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 19:04:23.267495  802119 cni.go:84] Creating CNI manager for ""
	I0919 19:04:23.267551  802119 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 19:04:23.267611  802119 start.go:340] cluster config:
	{Name:functional-141069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-141069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:04:23.269322  802119 out.go:177] * dry-run validation complete!
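
Both `docker system info --format "{{json .}}"` invocations above are how the driver validation probes the daemon before reusing the existing profile: the entire info struct arrives as one JSON blob and is decoded in Go. A standalone sketch of the same probe, decoding only a few of the fields visible in the dump (illustrative code, not minikube's cli_runner):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// dockerInfo picks a handful of the fields the daemon reports; the JSON
	// keys match the names in the info dump logged above.
	type dockerInfo struct {
		ServerVersion string `json:"ServerVersion"`
		Driver        string `json:"Driver"`
		NCPU          int    `json:"NCPU"`
		MemTotal      int64  `json:"MemTotal"`
	}

	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			log.Fatalf("docker daemon not reachable: %v", err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			log.Fatalf("unexpected info payload: %v", err)
		}
		fmt.Printf("server %s, storage driver %s, %d CPUs, %d bytes RAM\n",
			info.ServerVersion, info.Driver, info.NCPU, info.MemTotal)
	}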
	
	
	==> CRI-O <==
	Sep 19 19:13:37 functional-141069 crio[5591]: time="2024-09-19 19:13:37.253231184Z" level=info msg="Image docker.io/mysql:5.7 not found" id=961f69d5-d099-464c-af75-1a76604a3197 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:13:44 functional-141069 crio[5591]: time="2024-09-19 19:13:44.252804648Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=46d81ce3-1cf1-45f5-b96e-245c00079f41 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:13:44 functional-141069 crio[5591]: time="2024-09-19 19:13:44.253076968Z" level=info msg="Image docker.io/nginx:alpine not found" id=46d81ce3-1cf1-45f5-b96e-245c00079f41 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:13:44 functional-141069 crio[5591]: time="2024-09-19 19:13:44.253558849Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=2439be75-cebb-4c3b-ada9-19c71876a3e8 name=/runtime.v1.ImageService/PullImage
	Sep 19 19:13:44 functional-141069 crio[5591]: time="2024-09-19 19:13:44.268716812Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Sep 19 19:13:45 functional-141069 crio[5591]: time="2024-09-19 19:13:45.252761204Z" level=info msg="Checking image status: docker.io/nginx:latest" id=752c2008-4392-4d0f-8030-5ac5943b2bfb name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:13:45 functional-141069 crio[5591]: time="2024-09-19 19:13:45.252966351Z" level=info msg="Image docker.io/nginx:latest not found" id=752c2008-4392-4d0f-8030-5ac5943b2bfb name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:13:52 functional-141069 crio[5591]: time="2024-09-19 19:13:52.252688033Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=3d8f2e97-1935-41af-91b5-4162d436bd19 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:13:52 functional-141069 crio[5591]: time="2024-09-19 19:13:52.252933558Z" level=info msg="Image docker.io/mysql:5.7 not found" id=3d8f2e97-1935-41af-91b5-4162d436bd19 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:14:00 functional-141069 crio[5591]: time="2024-09-19 19:14:00.252300531Z" level=info msg="Checking image status: docker.io/nginx:latest" id=5392752b-a7ed-4a37-b0cc-03ee2f4e749d name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:14:00 functional-141069 crio[5591]: time="2024-09-19 19:14:00.252583596Z" level=info msg="Image docker.io/nginx:latest not found" id=5392752b-a7ed-4a37-b0cc-03ee2f4e749d name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:14:05 functional-141069 crio[5591]: time="2024-09-19 19:14:05.252842599Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=b0c1a7ac-e55f-4251-aed3-c07fb5bee410 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:14:05 functional-141069 crio[5591]: time="2024-09-19 19:14:05.253149118Z" level=info msg="Image docker.io/mysql:5.7 not found" id=b0c1a7ac-e55f-4251-aed3-c07fb5bee410 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:14:15 functional-141069 crio[5591]: time="2024-09-19 19:14:15.252129651Z" level=info msg="Checking image status: docker.io/nginx:latest" id=ef71e00a-c77d-4a4e-91bb-2d169586413a name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:14:15 functional-141069 crio[5591]: time="2024-09-19 19:14:15.252380734Z" level=info msg="Image docker.io/nginx:latest not found" id=ef71e00a-c77d-4a4e-91bb-2d169586413a name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:14:20 functional-141069 crio[5591]: time="2024-09-19 19:14:20.251931235Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=1b740597-8d16-458f-82eb-1d6a16ec005d name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:14:20 functional-141069 crio[5591]: time="2024-09-19 19:14:20.252213743Z" level=info msg="Image docker.io/mysql:5.7 not found" id=1b740597-8d16-458f-82eb-1d6a16ec005d name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:14:29 functional-141069 crio[5591]: time="2024-09-19 19:14:29.375639063Z" level=info msg="Pulling image: docker.io/nginx:latest" id=6123a670-ed81-4781-8947-02a97fcc592a name=/runtime.v1.ImageService/PullImage
	Sep 19 19:14:29 functional-141069 crio[5591]: time="2024-09-19 19:14:29.376961033Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Sep 19 19:14:33 functional-141069 crio[5591]: time="2024-09-19 19:14:33.251914030Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=d475aafb-0153-43b5-8074-19c036d3750e name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:14:33 functional-141069 crio[5591]: time="2024-09-19 19:14:33.252143227Z" level=info msg="Image docker.io/mysql:5.7 not found" id=d475aafb-0153-43b5-8074-19c036d3750e name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:14:41 functional-141069 crio[5591]: time="2024-09-19 19:14:41.252545460Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=79aca6d5-d2c1-4baf-af05-9518cbcf2ee1 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:14:41 functional-141069 crio[5591]: time="2024-09-19 19:14:41.252782320Z" level=info msg="Image docker.io/nginx:alpine not found" id=79aca6d5-d2c1-4baf-af05-9518cbcf2ee1 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:14:46 functional-141069 crio[5591]: time="2024-09-19 19:14:46.252718002Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=08160861-5cbc-408e-b1bd-aad445f178c4 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 19:14:46 functional-141069 crio[5591]: time="2024-09-19 19:14:46.253059816Z" level=info msg="Image docker.io/mysql:5.7 not found" id=08160861-5cbc-408e-b1bd-aad445f178c4 name=/runtime.v1.ImageService/ImageStatus
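
Each "Checking image status" / "Image ... not found" pair above is a poll of CRI-O's /runtime.v1.ImageService/ImageStatus RPC while the docker.io pulls for nginx and mysql stall. A hedged sketch of issuing the same RPC directly over the CRI socket advertised in the node annotations below (an illustrative client, not kubelet code):

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Same socket as the kubeadm.alpha.kubernetes.io/cri-socket annotation.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		resp, err := runtimeapi.NewImageServiceClient(conn).ImageStatus(ctx,
			&runtimeapi.ImageStatusRequest{Image: &runtimeapi.ImageSpec{Image: "docker.io/mysql:5.7"}})
		if err != nil {
			log.Fatal(err)
		}
		if resp.Image == nil {
			fmt.Println("image not found") // the state CRI-O keeps logging above
		} else {
			fmt.Printf("present: %s (%d bytes)\n", resp.Image.Id, resp.Image.Size_)
		}
	}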
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	a9db194c74170       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         9 minutes ago       Running             kubernetes-dashboard        0                   8c2a98ff69832       kubernetes-dashboard-695b96c756-h57zq
	05899174856a7       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   47fbe5fb31268       dashboard-metrics-scraper-c5db448b4-mhdfb
	2474c26a79e66       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              10 minutes ago      Exited              mount-munger                0                   c24b2461df330       busybox-mount
	3be6c9386c0a1       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               10 minutes ago      Running             echoserver                  0                   7c8b74aebd312       hello-node-6b9f76b5c7-grt5w
	5916651c7e043       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               10 minutes ago      Running             echoserver                  0                   53d2b97e793bb       hello-node-connect-67bdd5bbb4-llvnt
	46eae5a7c6154       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 11 minutes ago      Running             coredns                     3                   d0db000191c9d       coredns-7c65d6cfc9-jsgn7
	d26b6e8f948dd       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                 11 minutes ago      Running             kindnet-cni                 3                   234446f0c3aea       kindnet-6vwt2
	eccfad4b0b156       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                 11 minutes ago      Running             kube-proxy                  3                   2ee6f4f424c1b       kube-proxy-s7zj9
	23dbcb40611a7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Running             storage-provisioner         4                   fae1b0fada010       storage-provisioner
	34ec523c69e36       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                 11 minutes ago      Running             kube-apiserver              0                   da4489f4bb536       kube-apiserver-functional-141069
	b6465a618542d       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                 11 minutes ago      Running             kube-scheduler              3                   e2bb0608b6ebd       kube-scheduler-functional-141069
	f1ad386107b94       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                 11 minutes ago      Running             kube-controller-manager     3                   c6d19e4d3421c       kube-controller-manager-functional-141069
	c187be7729ad2       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 11 minutes ago      Running             etcd                        3                   a25d700cd7456       etcd-functional-141069
	a6e87ed34ac16       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 12 minutes ago      Exited              storage-provisioner         3                   fae1b0fada010       storage-provisioner
	f93558a9ee62c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 12 minutes ago      Exited              coredns                     2                   d0db000191c9d       coredns-7c65d6cfc9-jsgn7
	849c4710abbe5       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 12 minutes ago      Exited              etcd                        2                   a25d700cd7456       etcd-functional-141069
	006f94edf102d       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                 12 minutes ago      Exited              kube-scheduler              2                   e2bb0608b6ebd       kube-scheduler-functional-141069
	ffb5bc44aeb84       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                 12 minutes ago      Exited              kube-controller-manager     2                   c6d19e4d3421c       kube-controller-manager-functional-141069
	d4c1e18d5b680       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                 12 minutes ago      Exited              kube-proxy                  2                   2ee6f4f424c1b       kube-proxy-s7zj9
	6f4fb1b664ea6       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                 12 minutes ago      Exited              kindnet-cni                 2                   234446f0c3aea       kindnet-6vwt2
	
	
	==> coredns [46eae5a7c6154615c0a652f87ce896bce4536c077459a12b87cc5dfc6be0d30f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:40056 - 4851 "HINFO IN 4911277686334584114.634317722569930143. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.019995881s
	
	
	==> coredns [f93558a9ee62c4da6dbc743766c097e163203f09f08b4e7cac573c2e727f8d2d] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:47190 - 7219 "HINFO IN 6182279019942428577.1828839238233307805. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017029646s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
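
The reflector errors in this pre-restart coredns instance are the kubernetes plugin's cluster-scoped list+watch of Services, Namespaces, and EndpointSlices failing twice over: first against an unreachable apiserver (connection refused on 10.96.0.1:443), then against RBAC before the system:coredns ClusterRole was recreated. A minimal client-go sketch of the EndpointSlice list that was being denied (the kubeconfig path is an assumption; coredns itself uses in-cluster config):

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Cluster-scoped list; a 403 here is exactly the "endpointslices ... is
		// forbidden" error logged above, and a refused TCP connect the other one.
		slices, err := cs.DiscoveryV1().EndpointSlices(metav1.NamespaceAll).
			List(context.Background(), metav1.ListOptions{Limit: 500})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("listed %d endpointslices\n", len(slices.Items))
	}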
	
	
	==> describe nodes <==
	Name:               functional-141069
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-141069
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=add7f35814b0dd6a5321a564d1b48a9e50f303ef
	                    minikube.k8s.io/name=functional-141069
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_19T19_00_56_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 19 Sep 2024 19:00:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-141069
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 19 Sep 2024 19:14:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 19 Sep 2024 19:10:50 +0000   Thu, 19 Sep 2024 19:00:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 19 Sep 2024 19:10:50 +0000   Thu, 19 Sep 2024 19:00:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 19 Sep 2024 19:10:50 +0000   Thu, 19 Sep 2024 19:00:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 19 Sep 2024 19:10:50 +0000   Thu, 19 Sep 2024 19:01:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-141069
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 95f6fe0044074469a9e63885e0bd760d
	  System UUID:                f91a398c-4f58-4006-a0a5-af2d4cd2e27b
	  Boot ID:                    e13586fb-8251-4108-a9ef-ca5be7772d16
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6b9f76b5c7-grt5w                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-node-connect-67bdd5bbb4-llvnt          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     mysql-6cdb49bbb-fwxgw                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-jsgn7                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     13m
	  kube-system                 etcd-functional-141069                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-6vwt2                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-functional-141069             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-functional-141069    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-s7zj9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-functional-141069             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-mhdfb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-h57zq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 13m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 14m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node functional-141069 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node functional-141069 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node functional-141069 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  13m                kubelet          Node functional-141069 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                kubelet          Node functional-141069 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m                kubelet          Node functional-141069 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                node-controller  Node functional-141069 event: Registered Node functional-141069 in Controller
	  Normal   NodeReady                13m                kubelet          Node functional-141069 status is now: NodeReady
	  Warning  ContainerGCFailed        12m                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           12m                node-controller  Node functional-141069 event: Registered Node functional-141069 in Controller
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-141069 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-141069 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node functional-141069 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node functional-141069 event: Registered Node functional-141069 in Controller
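
For reference, the Allocated resources percentages above are computed against the Allocatable block: CPU requests 1450m / 8000m ≈ 18% and limits 800m / 8000m = 10%; memory requests 732Mi against 32859320Ki (≈ 32089Mi) come to ≈ 2%.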
	
	
	==> dmesg <==
	[  +1.030260] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000006] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.003972] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000005] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +2.011865] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000005] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.003970] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000004] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +4.219718] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000009] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000005] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[Sep19 18:17] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000009] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[  +0.000035] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-8bc94e1d825c
	[  +0.000006] ll header: 00000000: 02 42 c8 1f 3e 15 02 42 c0 a8 5e 02 08 00
	[Sep19 19:04] FS-Cache: Duplicate cookie detected
	[  +0.004814] FS-Cache: O-cookie c=00000036 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006740] FS-Cache: O-cookie d=000000003d32a7f7{9P.session} n=000000008d2c1d93
	[  +0.007518] FS-Cache: O-key=[10] '34323938333031383536'
	[  +0.005349] FS-Cache: N-cookie c=00000037 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006665] FS-Cache: N-cookie d=000000003d32a7f7{9P.session} n=000000005d7d76c3
	[  +0.008916] FS-Cache: N-key=[10] '34323938333031383536'
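
The "martian source" lines record packets arriving on the docker bridge br-8bc94e1d825c whose source address, 10.96.0.1, is not routable on that interface; that address is the apiserver's service VIP from the ServiceCIDR:10.96.0.0/12 config above, and the kernel logs such packets when net.ipv4.conf.*.log_martians is set. The FS-Cache "Duplicate cookie" entries come from a 9P session (note the {9P.session} cookies), consistent with the profile's 9p host mount.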
	
	
	==> etcd [849c4710abbe5afd380b7564b1d93d183fef259cdbbb6909ced396be250d7803] <==
	{"level":"info","ts":"2024-09-19T19:02:28.246855Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-19T19:02:28.246891Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-19T19:02:28.246909Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-09-19T19:02:28.246917Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-19T19:02:28.246943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-09-19T19:02:28.246954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-19T19:02:28.249820Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-19T19:02:28.250072Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-19T19:02:28.250126Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-19T19:02:28.249837Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-19T19:02:28.249832Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-141069 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-19T19:02:28.251261Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T19:02:28.251346Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T19:02:28.252736Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-19T19:02:28.252731Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-19T19:02:52.145216Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-19T19:02:52.145292Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-141069","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-09-19T19:02:52.145397Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-19T19:02:52.145501Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-19T19:02:52.165060Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-19T19:02:52.165111Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-19T19:02:52.165171Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-09-19T19:02:52.167418Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-19T19:02:52.167553Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-19T19:02:52.167573Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-141069","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [c187be7729ad28b438ef6ecb381c8d161a3d30f2aa92fc2c41b62adc7654960e] <==
	{"level":"info","ts":"2024-09-19T19:03:09.053182Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-19T19:03:09.055882Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-19T19:03:09.056045Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-19T19:03:09.056143Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-19T19:03:09.056174Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-19T19:03:09.056246Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-19T19:03:10.744548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-19T19:03:10.744599Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-19T19:03:10.744643Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-19T19:03:10.744661Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2024-09-19T19:03:10.744667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-09-19T19:03:10.744677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2024-09-19T19:03:10.744694Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-09-19T19:03:10.745879Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-19T19:03:10.745896Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-19T19:03:10.745879Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-141069 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-19T19:03:10.746124Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-19T19:03:10.746176Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-19T19:03:10.746820Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T19:03:10.746935Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-19T19:03:10.747644Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-19T19:03:10.747805Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-19T19:13:10.762409Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1191}
	{"level":"info","ts":"2024-09-19T19:13:10.782468Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1191,"took":"19.568031ms","hash":36126803,"current-db-size-bytes":4128768,"current-db-size":"4.1 MB","current-db-size-in-use-bytes":1884160,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2024-09-19T19:13:10.782510Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":36126803,"revision":1191,"compact-revision":-1}
	
	
	==> kernel <==
	 19:14:52 up  3:57,  0 users,  load average: 0.09, 0.17, 0.31
	Linux functional-141069 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [6f4fb1b664ea6519bcf463627f08bacfda441cdc5b4038a27fa434c3a065b4f3] <==
	W0919 19:02:17.845921       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0919 19:02:17.845967       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0919 19:02:19.383351       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0919 19:02:19.383401       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0919 19:02:20.179948       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0919 19:02:20.179991       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0919 19:02:20.322818       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0919 19:02:20.322872       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0919 19:02:20.990136       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0919 19:02:20.990183       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0919 19:02:23.805849       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0919 19:02:23.805896       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0919 19:02:24.259663       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0919 19:02:24.259698       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0919 19:02:25.146316       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0919 19:02:25.146368       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0919 19:02:25.485800       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0919 19:02:25.485853       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	I0919 19:02:35.857433       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0919 19:02:35.857469       1 metrics.go:61] Registering metrics
	I0919 19:02:35.857530       1 controller.go:374] Syncing nftables rules
	I0919 19:02:36.456640       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 19:02:36.456679       1 main.go:299] handling current node
	I0919 19:02:46.457595       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 19:02:46.457662       1 main.go:299] handling current node
	
	
	==> kindnet [d26b6e8f948dd62d1c4b63c852c1fce8421d19aea8c6dd2d9322059bdbe43ad8] <==
	I0919 19:12:43.267439       1 main.go:299] handling current node
	I0919 19:12:53.266445       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 19:12:53.266497       1 main.go:299] handling current node
	I0919 19:13:03.267384       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 19:13:03.267419       1 main.go:299] handling current node
	I0919 19:13:13.264770       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 19:13:13.264817       1 main.go:299] handling current node
	I0919 19:13:23.271384       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 19:13:23.271423       1 main.go:299] handling current node
	I0919 19:13:33.273461       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 19:13:33.273499       1 main.go:299] handling current node
	I0919 19:13:43.267387       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 19:13:43.267424       1 main.go:299] handling current node
	I0919 19:13:53.271377       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 19:13:53.271415       1 main.go:299] handling current node
	I0919 19:14:03.267378       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 19:14:03.267420       1 main.go:299] handling current node
	I0919 19:14:13.264463       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 19:14:13.264506       1 main.go:299] handling current node
	I0919 19:14:23.273777       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 19:14:23.273819       1 main.go:299] handling current node
	I0919 19:14:33.267384       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 19:14:33.267421       1 main.go:299] handling current node
	I0919 19:14:43.266103       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0919 19:14:43.266134       1 main.go:299] handling current node
	
	
	==> kube-apiserver [34ec523c69e36d6dcdba816af0b779ec75bb98820fd4c1f8df436ef44a706f9d] <==
	I0919 19:03:11.836251       1 aggregator.go:171] initial CRD sync complete...
	I0919 19:03:11.836307       1 autoregister_controller.go:144] Starting autoregister controller
	I0919 19:03:11.836340       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0919 19:03:11.836372       1 cache.go:39] Caches are synced for autoregister controller
	I0919 19:03:11.836115       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0919 19:03:11.838969       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0919 19:03:11.841481       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0919 19:03:11.842699       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0919 19:03:12.678031       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0919 19:03:13.695219       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0919 19:03:13.792397       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0919 19:03:13.803686       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0919 19:03:13.858621       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0919 19:03:13.864313       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0919 19:03:30.139874       1 controller.go:615] quota admission added evaluator for: endpoints
	I0919 19:03:33.076946       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.97.203.230"}
	I0919 19:03:33.085217       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 19:03:38.631182       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.36.227"}
	I0919 19:03:39.281379       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0919 19:03:39.368721       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.254.162"}
	I0919 19:03:39.511422       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.235.200"}
	I0919 19:04:24.243692       1 controller.go:615] quota admission added evaluator for: namespaces
	I0919 19:04:24.546595       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.165.61"}
	I0919 19:04:24.562577       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.101.114"}
	I0919 19:04:50.758816       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.107.102.176"}
	
	
	==> kube-controller-manager [f1ad386107b940be6b13e6128491e816dc896755be7b8d6fb397f22a83c02f70] <==
	I0919 19:04:24.459991       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="15.810494ms"
	I0919 19:04:24.460077       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="46.593µs"
	I0919 19:04:24.469193       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="72.128µs"
	I0919 19:04:43.914023       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-141069"
	I0919 19:04:50.801180       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="9.638302ms"
	I0919 19:04:50.806343       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="5.098734ms"
	I0919 19:04:50.806409       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="37.98µs"
	I0919 19:04:50.808182       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="46.004µs"
	I0919 19:05:14.652366       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-141069"
	I0919 19:05:17.715403       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="7.950797ms"
	I0919 19:05:17.715489       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="48.377µs"
	I0919 19:05:21.710007       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="5.78228ms"
	I0919 19:05:21.710107       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="53.558µs"
	I0919 19:05:45.132047       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-141069"
	I0919 19:05:52.768197       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="71.817µs"
	I0919 19:06:05.260824       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="89.677µs"
	I0919 19:07:39.260669       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="63.381µs"
	I0919 19:07:54.260751       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="71.64µs"
	I0919 19:09:16.262266       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="111.754µs"
	I0919 19:09:28.262087       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="136.094µs"
	I0919 19:10:38.262188       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="179.579µs"
	I0919 19:10:50.254926       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-141069"
	I0919 19:10:53.262173       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="63.715µs"
	I0919 19:12:39.260933       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="137.931µs"
	I0919 19:12:50.260430       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="75.882µs"
	
	
	==> kube-controller-manager [ffb5bc44aeb8403cce7208898412b27c5d35140d368c10269958b1c41225ae70] <==
	I0919 19:02:32.415263       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0919 19:02:32.415296       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0919 19:02:32.415321       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0919 19:02:32.415337       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0919 19:02:32.415411       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-141069"
	I0919 19:02:32.470665       1 shared_informer.go:320] Caches are synced for ephemeral
	I0919 19:02:32.509648       1 shared_informer.go:320] Caches are synced for PV protection
	I0919 19:02:32.509667       1 shared_informer.go:320] Caches are synced for expand
	I0919 19:02:32.515473       1 shared_informer.go:320] Caches are synced for resource quota
	I0919 19:02:32.516387       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="107.232193ms"
	I0919 19:02:32.516554       1 shared_informer.go:320] Caches are synced for attach detach
	I0919 19:02:32.516591       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="105.79µs"
	I0919 19:02:32.558130       1 shared_informer.go:320] Caches are synced for persistent volume
	I0919 19:02:32.558142       1 shared_informer.go:320] Caches are synced for stateful set
	I0919 19:02:32.558159       1 shared_informer.go:320] Caches are synced for PVC protection
	I0919 19:02:32.573562       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0919 19:02:32.587661       1 shared_informer.go:320] Caches are synced for endpoint
	I0919 19:02:32.602159       1 shared_informer.go:320] Caches are synced for resource quota
	I0919 19:02:32.608588       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0919 19:02:32.658132       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0919 19:02:33.026666       1 shared_informer.go:320] Caches are synced for garbage collector
	I0919 19:02:33.058504       1 shared_informer.go:320] Caches are synced for garbage collector
	I0919 19:02:33.058556       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0919 19:02:33.334378       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.763778ms"
	I0919 19:02:33.334490       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="77.621µs"
	
	
	==> kube-proxy [d4c1e18d5b680fa01f6333215fdcca5347b8af219200b2f7164895791f0a4b74] <==
	I0919 19:02:18.037468       1 server_linux.go:66] "Using iptables proxy"
	E0919 19:02:18.153525       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-141069\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0919 19:02:19.264315       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-141069\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0919 19:02:21.459833       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-141069\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0919 19:02:26.194081       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-141069\": dial tcp 192.168.49.2:8441: connect: connection refused"
	I0919 19:02:34.244162       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0919 19:02:34.244226       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 19:02:34.263792       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 19:02:34.263872       1 server_linux.go:169] "Using iptables Proxier"
	I0919 19:02:34.266023       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 19:02:34.266354       1 server.go:483] "Version info" version="v1.31.1"
	I0919 19:02:34.266580       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 19:02:34.268931       1 config.go:105] "Starting endpoint slice config controller"
	I0919 19:02:34.269002       1 config.go:328] "Starting node config controller"
	I0919 19:02:34.269010       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0919 19:02:34.268976       1 config.go:199] "Starting service config controller"
	I0919 19:02:34.269141       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0919 19:02:34.269070       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0919 19:02:34.369254       1 shared_informer.go:320] Caches are synced for node config
	I0919 19:02:34.369278       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0919 19:02:34.369322       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [eccfad4b0b156e1fb323a54215b597cbc5427631771a874d360f125cf051f49a] <==
	I0919 19:03:12.774668       1 server_linux.go:66] "Using iptables proxy"
	I0919 19:03:12.970949       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0919 19:03:12.971024       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 19:03:13.053154       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 19:03:13.053213       1 server_linux.go:169] "Using iptables Proxier"
	I0919 19:03:13.055059       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 19:03:13.055452       1 server.go:483] "Version info" version="v1.31.1"
	I0919 19:03:13.055493       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 19:03:13.056538       1 config.go:105] "Starting endpoint slice config controller"
	I0919 19:03:13.056583       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0919 19:03:13.056601       1 config.go:328] "Starting node config controller"
	I0919 19:03:13.056612       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0919 19:03:13.056631       1 config.go:199] "Starting service config controller"
	I0919 19:03:13.056641       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0919 19:03:13.157139       1 shared_informer.go:320] Caches are synced for service config
	I0919 19:03:13.157169       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0919 19:03:13.157220       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [006f94edf102da345d1fcff9937a4c2cba46aa40e9c589e33e9b3268ba828754] <==
	E0919 19:02:27.439854       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0919 19:02:27.502033       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0919 19:02:27.502091       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0919 19:02:27.539964       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0919 19:02:27.540010       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0919 19:02:27.553825       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0919 19:02:27.553870       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0919 19:02:27.649979       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0919 19:02:27.650031       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0919 19:02:27.683072       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0919 19:02:27.683113       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0919 19:02:27.707917       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0919 19:02:27.707952       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0919 19:02:27.722709       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0919 19:02:27.722740       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0919 19:02:27.793060       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0919 19:02:27.793116       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0919 19:02:29.365360       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0919 19:02:29.365420       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0919 19:02:29.371384       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0919 19:02:29.371437       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0919 19:02:29.373860       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0919 19:02:29.373888       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0919 19:02:30.864375       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0919 19:02:52.146477       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [b6465a618542d82e167dfa84727410efc0fe50bc1fd4bcf4db74b60d855b1409] <==
	I0919 19:03:09.480520       1 serving.go:386] Generated self-signed cert in-memory
	W0919 19:03:11.736045       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 19:03:11.736093       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 19:03:11.736107       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 19:03:11.736115       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 19:03:11.756289       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0919 19:03:11.756316       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 19:03:11.758162       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 19:03:11.758211       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0919 19:03:11.758433       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0919 19:03:11.758595       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 19:03:11.858948       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 19 19:13:48 functional-141069 kubelet[5956]: E0919 19:13:48.468498    5956 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726773228468299048,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:13:48 functional-141069 kubelet[5956]: E0919 19:13:48.468546    5956 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726773228468299048,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:13:52 functional-141069 kubelet[5956]: E0919 19:13:52.253154    5956 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\"\"" pod="default/mysql-6cdb49bbb-fwxgw" podUID="6f666c87-a6c4-4e7e-803a-1dc8af345566"
	Sep 19 19:13:58 functional-141069 kubelet[5956]: E0919 19:13:58.469920    5956 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726773238469732762,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:13:58 functional-141069 kubelet[5956]: E0919 19:13:58.469960    5956 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726773238469732762,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:14:00 functional-141069 kubelet[5956]: E0919 19:14:00.252797    5956 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="a92b6612-8ecf-46c0-a205-78bd8970a8ff"
	Sep 19 19:14:05 functional-141069 kubelet[5956]: E0919 19:14:05.253459    5956 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\"\"" pod="default/mysql-6cdb49bbb-fwxgw" podUID="6f666c87-a6c4-4e7e-803a-1dc8af345566"
	Sep 19 19:14:08 functional-141069 kubelet[5956]: E0919 19:14:08.471246    5956 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726773248471059675,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:14:08 functional-141069 kubelet[5956]: E0919 19:14:08.471282    5956 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726773248471059675,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:14:18 functional-141069 kubelet[5956]: E0919 19:14:18.472753    5956 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726773258472575063,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:14:18 functional-141069 kubelet[5956]: E0919 19:14:18.472793    5956 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726773258472575063,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:14:20 functional-141069 kubelet[5956]: E0919 19:14:20.252469    5956 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\"\"" pod="default/mysql-6cdb49bbb-fwxgw" podUID="6f666c87-a6c4-4e7e-803a-1dc8af345566"
	Sep 19 19:14:28 functional-141069 kubelet[5956]: E0919 19:14:28.474314    5956 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726773268474163657,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:14:28 functional-141069 kubelet[5956]: E0919 19:14:28.474356    5956 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726773268474163657,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:14:29 functional-141069 kubelet[5956]: E0919 19:14:29.375101    5956 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 19 19:14:29 functional-141069 kubelet[5956]: E0919 19:14:29.375169    5956 kuberuntime_image.go:55] "Failed to pull image" err="loading manifest for target platform: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Sep 19 19:14:29 functional-141069 kubelet[5956]: E0919 19:14:29.375425    5956 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nginx,Image:docker.io/nginx:alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gglk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nginx-svc_default(d1ce905b-515b-4f60-aff6-1eeb2b5075af): ErrImagePull: loading manifest for target platform: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 19 19:14:29 functional-141069 kubelet[5956]: E0919 19:14:29.376813    5956 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"loading manifest for target platform: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="d1ce905b-515b-4f60-aff6-1eeb2b5075af"
	Sep 19 19:14:33 functional-141069 kubelet[5956]: E0919 19:14:33.252404    5956 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\"\"" pod="default/mysql-6cdb49bbb-fwxgw" podUID="6f666c87-a6c4-4e7e-803a-1dc8af345566"
	Sep 19 19:14:38 functional-141069 kubelet[5956]: E0919 19:14:38.476059    5956 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726773278475898693,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:14:38 functional-141069 kubelet[5956]: E0919 19:14:38.476095    5956 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726773278475898693,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:14:41 functional-141069 kubelet[5956]: E0919 19:14:41.253062    5956 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="d1ce905b-515b-4f60-aff6-1eeb2b5075af"
	Sep 19 19:14:46 functional-141069 kubelet[5956]: E0919 19:14:46.253317    5956 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\"\"" pod="default/mysql-6cdb49bbb-fwxgw" podUID="6f666c87-a6c4-4e7e-803a-1dc8af345566"
	Sep 19 19:14:48 functional-141069 kubelet[5956]: E0919 19:14:48.477586    5956 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726773288477395396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 19 19:14:48 functional-141069 kubelet[5956]: E0919 19:14:48.477629    5956 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726773288477395396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:225845,},InodesUsed:&UInt64Value{Value:116,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [a9db194c7417074f05eefd6d4880ffd3724012b383d3d9bbcbe68752ec751b14] <==
	2024/09/19 19:05:21 Starting overwatch
	2024/09/19 19:05:21 Using namespace: kubernetes-dashboard
	2024/09/19 19:05:21 Using in-cluster config to connect to apiserver
	2024/09/19 19:05:21 Using secret token for csrf signing
	2024/09/19 19:05:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/19 19:05:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/19 19:05:21 Successful initial request to the apiserver, version: v1.31.1
	2024/09/19 19:05:21 Generating JWE encryption key
	2024/09/19 19:05:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/19 19:05:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/19 19:05:21 Initializing JWE encryption key from synchronized object
	2024/09/19 19:05:21 Creating in-cluster Sidecar client
	2024/09/19 19:05:21 Successful request to sidecar
	2024/09/19 19:05:21 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [23dbcb40611a7af3ef6dd30657a99f22746cd91cefa5ae67952d1cabf5c9bd24] <==
	I0919 19:03:12.675768       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 19:03:12.743442       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 19:03:12.743514       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0919 19:03:30.143692       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 19:03:30.143808       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-141069_7be0cb94-a205-4b68-9c94-8c3ec73f28b8!
	I0919 19:03:30.143802       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9f500cfc-5004-4575-a4e8-04e5a0447dd7", APIVersion:"v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-141069_7be0cb94-a205-4b68-9c94-8c3ec73f28b8 became leader
	I0919 19:03:30.244234       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-141069_7be0cb94-a205-4b68-9c94-8c3ec73f28b8!
	I0919 19:03:44.436841       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0919 19:03:44.437055       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"78595a86-c89f-4050-9ad0-31fbf426503c", APIVersion:"v1", ResourceVersion:"784", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0919 19:03:44.436924       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    9532a489-7d52-48c9-a8d3-d2bf17ba5926 382 0 2024-09-19 19:01:01 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-09-19 19:01:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-78595a86-c89f-4050-9ad0-31fbf426503c &PersistentVolumeClaim{ObjectMeta:{myclaim  default  78595a86-c89f-4050-9ad0-31fbf426503c 784 0 2024-09-19 19:03:44 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-09-19 19:03:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-09-19 19:03:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0919 19:03:44.437394       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-78595a86-c89f-4050-9ad0-31fbf426503c" provisioned
	I0919 19:03:44.437419       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0919 19:03:44.437426       1 volume_store.go:212] Trying to save persistentvolume "pvc-78595a86-c89f-4050-9ad0-31fbf426503c"
	I0919 19:03:44.444067       1 volume_store.go:219] persistentvolume "pvc-78595a86-c89f-4050-9ad0-31fbf426503c" saved
	I0919 19:03:44.445864       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"78595a86-c89f-4050-9ad0-31fbf426503c", APIVersion:"v1", ResourceVersion:"784", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-78595a86-c89f-4050-9ad0-31fbf426503c
	
	
	==> storage-provisioner [a6e87ed34ac16dc8dc24d090d91c2687d421929e0259ce16f1250b512f05abbd] <==
	I0919 19:02:34.010385       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 19:02:34.017669       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 19:02:34.017714       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0919 19:02:51.414246       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 19:02:51.414310       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9f500cfc-5004-4575-a4e8-04e5a0447dd7", APIVersion:"v1", ResourceVersion:"600", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-141069_7dab80e0-0fd1-49d6-adcf-ddf3c49a0586 became leader
	I0919 19:02:51.414394       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-141069_7dab80e0-0fd1-49d6-adcf-ddf3c49a0586!
	I0919 19:02:51.515237       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-141069_7dab80e0-0fd1-49d6-adcf-ddf3c49a0586!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-141069 -n functional-141069
helpers_test.go:261: (dbg) Run:  kubectl --context functional-141069 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-6cdb49bbb-fwxgw nginx-svc sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-141069 describe pod busybox-mount mysql-6cdb49bbb-fwxgw nginx-svc sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-141069 describe pod busybox-mount mysql-6cdb49bbb-fwxgw nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-141069/192.168.49.2
	Start Time:       Thu, 19 Sep 2024 19:04:23 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://2474c26a79e6637990374cd6154b3b4203a4802ebc130ab9bbf8a3ab9eac5a93
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 19 Sep 2024 19:04:44 +0000
	      Finished:     Thu, 19 Sep 2024 19:04:44 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rfh8h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-rfh8h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-141069
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.001s (20.903s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             mysql-6cdb49bbb-fwxgw
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-141069/192.168.49.2
	Start Time:       Thu, 19 Sep 2024 19:04:50 +0000
	Labels:           app=mysql
	                  pod-template-hash=6cdb49bbb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-6cdb49bbb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hp9q5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hp9q5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-6cdb49bbb-fwxgw to functional-141069
	  Warning  Failed     5m51s (x2 over 7m29s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    4m59s (x4 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     4m28s (x2 over 9m1s)   kubelet            Failed to pull image "docker.io/mysql:5.7": loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m28s (x4 over 9m1s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m (x7 over 9m1s)      kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     4m (x7 over 9m1s)      kubelet            Error: ImagePullBackOff
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-141069/192.168.49.2
	Start Time:       Thu, 19 Sep 2024 19:03:38 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:  10.244.0.4
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gglk6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-gglk6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  11m                    default-scheduler  Successfully assigned default/nginx-svc to functional-141069
	  Warning  Failed     10m                    kubelet            Failed to pull image "docker.io/nginx:alpine": determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m21s (x4 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     6m21s (x3 over 9m38s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    5m55s (x7 over 10m)    kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     5m55s (x7 over 10m)    kubelet            Error: ImagePullBackOff
	  Normal   Pulling    69s (x6 over 11m)      kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-141069/192.168.49.2
	Start Time:       Thu, 19 Sep 2024 19:03:44 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:  10.244.0.7
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-snblv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-snblv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/sp-pod to functional-141069
	  Warning  Failed     6m52s                kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    6m9s (x4 over 11m)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     5m20s (x3 over 10m)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     5m20s (x4 over 10m)  kubelet            Error: ErrImagePull
	  Warning  Failed     4m39s (x7 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    68s (x20 over 10m)   kubelet            Back-off pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/MySQL FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/MySQL (602.76s)
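Note: all four pods above fail for the same reason: Docker Hub's anonymous pull rate limit ("toomanyrequests"), not a cluster or runtime fault. A minimal mitigation sketch, assuming Docker Hub credentials are available to the CI job (the user name and token below are placeholders):

	# authenticate the host daemon so direct "docker pull" calls get the higher limit
	docker login -u <dockerhub-user>
	# for pulls done by the kubelet/CRI-O inside the cluster, a pull secret
	# (referenced from the pods via imagePullSecrets) would be needed instead
	kubectl --context functional-141069 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<dockerhub-user> --docker-password=<token>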

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-141069 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [d1ce905b-515b-4f60-aff6-1eeb2b5075af] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:329: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: WARNING: pod list for "default" "run=nginx-svc" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-141069 -n functional-141069
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2024-09-19 19:07:38.93100714 +0000 UTC m=+1728.833966966
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-141069 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-141069 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-141069/192.168.49.2
Start Time:       Thu, 19 Sep 2024 19:03:38 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
  IP:  10.244.0.4
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gglk6 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-gglk6:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  4m                   default-scheduler  Successfully assigned default/nginx-svc to functional-141069
  Warning  Failed     3m28s                kubelet            Failed to pull image "docker.io/nginx:alpine": determining manifest MIME type for docker://nginx:alpine: reading manifest sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   Pulling    117s (x3 over 4m)    kubelet            Pulling image "docker.io/nginx:alpine"
  Warning  Failed     45s (x3 over 3m28s)  kubelet            Error: ErrImagePull
  Warning  Failed     45s (x2 over 2m23s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   BackOff    6s (x5 over 3m28s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
  Warning  Failed     6s (x5 over 3m28s)   kubelet            Error: ImagePullBackOff
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-141069 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-141069 logs nginx-svc -n default: exit status 1 (63.118973ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-141069 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.65s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Non-zero exit: docker pull kicbase/echo-server:1.0: exit status 1 (468.817751ms)

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
functional_test.go:344: failed to setup test (pull image): exit status 1

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/Setup (0.47s)
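Note: this Setup failure (the initial "docker pull kicbase/echo-server:1.0" being rate-limited) cascades into the daemon/save/load failures that follow: the image never reached the host daemon, so there was nothing to tag, load, or save (ImageTagAndLoadDaemon hits the same rate limit on its own pull). The intended daemon round trip, pieced together from the commands these tests run (a sketch; the tag step is inferred from the tag the tests expect to exist on the host):

	docker pull kicbase/echo-server:1.0
	docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-141069
	out/minikube-linux-amd64 -p functional-141069 image load --daemon kicbase/echo-server:functional-141069
	out/minikube-linux-amd64 -p functional-141069 image ls    # should now list the tag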

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 image load --daemon kicbase/echo-server:functional-141069 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-141069" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 image load --daemon kicbase/echo-server:functional-141069 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-141069" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:235: (dbg) Non-zero exit: docker pull kicbase/echo-server:latest: exit status 1 (460.207951ms)

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
functional_test.go:237: failed to setup test (pull image): exit status 1

                                                
                                                
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 image save kicbase/echo-server:functional-141069 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:386: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:411: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0919 19:05:28.101714  805386 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:05:28.101953  805386 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:05:28.101961  805386 out.go:358] Setting ErrFile to fd 2...
	I0919 19:05:28.101965  805386 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:05:28.102140  805386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-753213/.minikube/bin
	I0919 19:05:28.102734  805386 config.go:182] Loaded profile config "functional-141069": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:05:28.102832  805386 config.go:182] Loaded profile config "functional-141069": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:05:28.103224  805386 cli_runner.go:164] Run: docker container inspect functional-141069 --format={{.State.Status}}
	I0919 19:05:28.120903  805386 ssh_runner.go:195] Run: systemctl --version
	I0919 19:05:28.120953  805386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-141069
	I0919 19:05:28.137858  805386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/functional-141069/id_rsa Username:docker}
	I0919 19:05:28.227861  805386 cache_images.go:289] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W0919 19:05:28.227930  805386 cache_images.go:253] Failed to load cached images for "functional-141069": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I0919 19:05:28.227968  805386 cache_images.go:265] failed pushing to: functional-141069

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.17s)
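Note: the stderr above shows the direct cause: the tar file that ImageSaveToFile should have produced was never written (that step failed because the image was absent from the cluster), so this load fails with "no such file or directory". With the image present, the file round trip is simply (a sketch reusing the exact paths from the tests):

	out/minikube-linux-amd64 -p functional-141069 image save kicbase/echo-server:functional-141069 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	out/minikube-linux-amd64 -p functional-141069 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar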

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-141069
functional_test.go:419: (dbg) Non-zero exit: docker rmi kicbase/echo-server:functional-141069: exit status 1 (17.474648ms)

                                                
                                                
** stderr ** 
	Error response from daemon: No such image: kicbase/echo-server:functional-141069

                                                
                                                
** /stderr **
functional_test.go:421: failed to remove image from docker: exit status 1

                                                
                                                
** stderr ** 
	Error response from daemon: No such image: kicbase/echo-server:functional-141069

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (100.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I0919 19:07:39.061107  760079 retry.go:31] will retry after 2.481795087s: Temporary Error: Get "http:": http: no Host in request URL
I0919 19:07:41.544077  760079 retry.go:31] will retry after 5.407149761s: Temporary Error: Get "http:": http: no Host in request URL
I0919 19:07:46.951997  760079 retry.go:31] will retry after 4.015687565s: Temporary Error: Get "http:": http: no Host in request URL
I0919 19:07:50.968831  760079 retry.go:31] will retry after 7.902378701s: Temporary Error: Get "http:": http: no Host in request URL
I0919 19:07:58.872401  760079 retry.go:31] will retry after 19.322431519s: Temporary Error: Get "http:": http: no Host in request URL
I0919 19:08:18.195975  760079 retry.go:31] will retry after 32.363154005s: Temporary Error: Get "http:": http: no Host in request URL
I0919 19:08:50.559338  760079 retry.go:31] will retry after 28.951559566s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-141069 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx-svc   LoadBalancer   10.100.36.227   10.100.36.227   80:30960/TCP   5m41s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (100.51s)
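Note: the tunnel itself worked (nginx-svc received external IP 10.100.36.227); the bogus "http:" URL with no host is most likely a downstream symptom of the earlier WaitService failure, which left the test without a resolved address, and the nginx pod never became ready because of the pull rate limit. A manual check of the same path (a sketch, run on the host while the tunnel is up):

	out/minikube-linux-amd64 tunnel -p functional-141069 &
	kubectl --context functional-141069 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	curl http://10.100.36.227/    # serves "Welcome to nginx!" once the image actually pulls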

                                                
                                    

Test pass (287/327)

Order   Passed test   Duration
3 TestDownloadOnly/v1.20.0/json-events 4.51
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 5.56
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 1.09
21 TestBinaryMirror 0.76
22 TestOffline 59.96
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 173.75
31 TestAddons/serial/GCPAuth/Namespaces 0.12
35 TestAddons/parallel/InspektorGadget 31.85
37 TestAddons/parallel/HelmTiller 13.05
40 TestAddons/parallel/Headlamp 16.41
41 TestAddons/parallel/CloudSpanner 6.5
42 TestAddons/parallel/LocalPath 52.96
43 TestAddons/parallel/NvidiaDevicePlugin 5.46
44 TestAddons/parallel/Yakd 10.64
45 TestAddons/StoppedEnableDisable 12.09
46 TestCertOptions 28.64
47 TestCertExpiration 223.01
49 TestForceSystemdFlag 29.53
50 TestForceSystemdEnv 29.67
52 TestKVMDriverInstallOrUpdate 3.11
56 TestErrorSpam/setup 23.32
57 TestErrorSpam/start 0.57
58 TestErrorSpam/status 0.86
59 TestErrorSpam/pause 1.49
60 TestErrorSpam/unpause 1.69
61 TestErrorSpam/stop 1.35
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 68.8
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 58.57
68 TestFunctional/serial/KubeContext 0.05
69 TestFunctional/serial/KubectlGetPods 0.06
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.05
73 TestFunctional/serial/CacheCmd/cache/add_local 1.32
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.63
78 TestFunctional/serial/CacheCmd/cache/delete 0.1
79 TestFunctional/serial/MinikubeKubectlCmd 0.11
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 39.19
82 TestFunctional/serial/ComponentHealth 0.07
83 TestFunctional/serial/LogsCmd 1.34
84 TestFunctional/serial/LogsFileCmd 1.36
85 TestFunctional/serial/InvalidService 4.54
87 TestFunctional/parallel/ConfigCmd 0.38
88 TestFunctional/parallel/DashboardCmd 61.59
89 TestFunctional/parallel/DryRun 0.35
90 TestFunctional/parallel/InternationalLanguage 0.16
91 TestFunctional/parallel/StatusCmd 0.88
95 TestFunctional/parallel/ServiceCmdConnect 41.49
96 TestFunctional/parallel/AddonsCmd 0.14
99 TestFunctional/parallel/SSHCmd 0.61
100 TestFunctional/parallel/CpCmd 1.76
102 TestFunctional/parallel/FileSync 0.24
103 TestFunctional/parallel/CertSync 1.76
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.49
111 TestFunctional/parallel/License 0.19
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.57
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
117 TestFunctional/parallel/ServiceCmd/DeployApp 40.16
118 TestFunctional/parallel/ServiceCmd/List 0.48
119 TestFunctional/parallel/ServiceCmd/JSONOutput 0.48
120 TestFunctional/parallel/ServiceCmd/HTTPS 0.34
121 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
122 TestFunctional/parallel/ServiceCmd/Format 0.35
123 TestFunctional/parallel/ProfileCmd/profile_list 0.37
124 TestFunctional/parallel/ServiceCmd/URL 0.35
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
126 TestFunctional/parallel/MountCmd/any-port 25.67
127 TestFunctional/parallel/MountCmd/specific-port 1.54
128 TestFunctional/parallel/MountCmd/VerifyCleanup 1.67
129 TestFunctional/parallel/Version/short 0.05
130 TestFunctional/parallel/Version/components 0.46
131 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
132 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
133 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
134 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
135 TestFunctional/parallel/ImageCommands/ImageBuild 1.94
141 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
151 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
152 TestFunctional/delete_echo-server_images 0.03
153 TestFunctional/delete_my-image_image 0.02
154 TestFunctional/delete_minikube_cached_images 0.02
158 TestMultiControlPlane/serial/StartCluster 150.55
159 TestMultiControlPlane/serial/DeployApp 4.06
160 TestMultiControlPlane/serial/PingHostFromPods 1.04
161 TestMultiControlPlane/serial/AddWorkerNode 30.3
162 TestMultiControlPlane/serial/NodeLabels 0.07
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.87
164 TestMultiControlPlane/serial/CopyFile 15.89
165 TestMultiControlPlane/serial/StopSecondaryNode 12.48
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.67
167 TestMultiControlPlane/serial/RestartSecondaryNode 19.92
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.96
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 273.34
170 TestMultiControlPlane/serial/DeleteSecondaryNode 12.11
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.66
172 TestMultiControlPlane/serial/StopCluster 35.54
173 TestMultiControlPlane/serial/RestartCluster 67.05
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.65
175 TestMultiControlPlane/serial/AddSecondaryNode 66.87
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.84
180 TestJSONOutput/start/Command 67.53
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.68
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.58
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 5.73
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.2
205 TestKicCustomNetwork/create_custom_network 31.07
206 TestKicCustomNetwork/use_default_bridge_network 23.84
207 TestKicExistingNetwork 25.48
208 TestKicCustomSubnet 26.94
209 TestKicStaticIP 23.47
210 TestMainNoArgs 0.05
211 TestMinikubeProfile 51.53
214 TestMountStart/serial/StartWithMountFirst 5.6
215 TestMountStart/serial/VerifyMountFirst 0.24
216 TestMountStart/serial/StartWithMountSecond 8.29
217 TestMountStart/serial/VerifyMountSecond 0.24
218 TestMountStart/serial/DeleteFirst 1.61
219 TestMountStart/serial/VerifyMountPostDelete 0.24
220 TestMountStart/serial/Stop 1.17
221 TestMountStart/serial/RestartStopped 7.2
222 TestMountStart/serial/VerifyMountPostStop 0.24
225 TestMultiNode/serial/FreshStart2Nodes 72.28
226 TestMultiNode/serial/DeployApp2Nodes 3.56
227 TestMultiNode/serial/PingHostFrom2Pods 0.7
228 TestMultiNode/serial/AddNode 55.64
229 TestMultiNode/serial/MultiNodeLabels 0.06
230 TestMultiNode/serial/ProfileList 0.63
231 TestMultiNode/serial/CopyFile 9.04
232 TestMultiNode/serial/StopNode 2.09
233 TestMultiNode/serial/StartAfterStop 9
234 TestMultiNode/serial/RestartKeepsNodes 101.94
235 TestMultiNode/serial/DeleteNode 5.22
236 TestMultiNode/serial/StopMultiNode 23.69
237 TestMultiNode/serial/RestartMultiNode 46.91
238 TestMultiNode/serial/ValidateNameConflict 21.94
243 TestPreload 105.29
245 TestScheduledStopUnix 97.41
248 TestInsufficientStorage 12.32
249 TestRunningBinaryUpgrade 62.16
251 TestKubernetesUpgrade 360.56
252 TestMissingContainerUpgrade 128.94
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
255 TestNoKubernetes/serial/StartWithK8s 41.48
256 TestNoKubernetes/serial/StartWithStopK8s 21.92
257 TestStoppedBinaryUpgrade/Setup 0.49
258 TestStoppedBinaryUpgrade/Upgrade 70.62
259 TestNoKubernetes/serial/Start 11.87
260 TestNoKubernetes/serial/VerifyK8sNotRunning 0.25
261 TestNoKubernetes/serial/ProfileList 35.36
262 TestNoKubernetes/serial/Stop 1.23
263 TestNoKubernetes/serial/StartNoArgs 7.17
264 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
273 TestPause/serial/Start 41.73
274 TestStoppedBinaryUpgrade/MinikubeLogs 1.04
282 TestNetworkPlugins/group/false 3.19
283 TestPause/serial/SecondStartNoReconfiguration 21.38
287 TestPause/serial/Pause 0.68
288 TestPause/serial/VerifyStatus 0.31
289 TestPause/serial/Unpause 0.7
290 TestPause/serial/PauseAgain 0.83
291 TestPause/serial/DeletePaused 2.78
292 TestPause/serial/VerifyDeletedResources 3.31
294 TestStartStop/group/old-k8s-version/serial/FirstStart 133.85
296 TestStartStop/group/no-preload/serial/FirstStart 51.47
297 TestStartStop/group/no-preload/serial/DeployApp 8.22
298 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.82
299 TestStartStop/group/no-preload/serial/Stop 11.89
300 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.16
301 TestStartStop/group/no-preload/serial/SecondStart 262.08
302 TestStartStop/group/old-k8s-version/serial/DeployApp 7.37
303 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.87
304 TestStartStop/group/old-k8s-version/serial/Stop 11.94
306 TestStartStop/group/embed-certs/serial/FirstStart 70.32
307 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
308 TestStartStop/group/old-k8s-version/serial/SecondStart 127.2
310 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 70.37
311 TestStartStop/group/embed-certs/serial/DeployApp 7.24
312 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.87
313 TestStartStop/group/embed-certs/serial/Stop 11.86
314 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
315 TestStartStop/group/embed-certs/serial/SecondStart 262.64
316 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.27
317 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.92
318 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.88
319 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
320 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 263.22
321 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
322 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.08
323 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
324 TestStartStop/group/old-k8s-version/serial/Pause 2.73
326 TestStartStop/group/newest-cni/serial/FirstStart 28.11
327 TestStartStop/group/newest-cni/serial/DeployApp 0
328 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.75
329 TestStartStop/group/newest-cni/serial/Stop 1.2
330 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
331 TestStartStop/group/newest-cni/serial/SecondStart 14.49
332 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
333 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
334 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
335 TestStartStop/group/newest-cni/serial/Pause 2.64
336 TestNetworkPlugins/group/auto/Start 40.23
337 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
338 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
339 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
340 TestStartStop/group/no-preload/serial/Pause 2.81
341 TestNetworkPlugins/group/flannel/Start 50.51
342 TestNetworkPlugins/group/auto/KubeletFlags 0.29
343 TestNetworkPlugins/group/auto/NetCatPod 10.18
344 TestNetworkPlugins/group/auto/DNS 0.14
345 TestNetworkPlugins/group/auto/Localhost 0.12
346 TestNetworkPlugins/group/auto/HairPin 0.11
347 TestNetworkPlugins/group/enable-default-cni/Start 67.94
348 TestNetworkPlugins/group/flannel/ControllerPod 6.01
349 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
350 TestNetworkPlugins/group/flannel/NetCatPod 10.18
351 TestNetworkPlugins/group/flannel/DNS 0.14
352 TestNetworkPlugins/group/flannel/Localhost 0.12
353 TestNetworkPlugins/group/flannel/HairPin 0.11
354 TestNetworkPlugins/group/bridge/Start 37.01
355 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.27
356 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.18
357 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
358 TestNetworkPlugins/group/bridge/NetCatPod 9.18
359 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
360 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
361 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
362 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
363 TestNetworkPlugins/group/bridge/DNS 21.61
364 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
365 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
366 TestStartStop/group/embed-certs/serial/Pause 2.89
367 TestNetworkPlugins/group/calico/Start 54.55
368 TestNetworkPlugins/group/kindnet/Start 40.47
369 TestNetworkPlugins/group/bridge/Localhost 0.12
370 TestNetworkPlugins/group/bridge/HairPin 0.11
371 TestNetworkPlugins/group/custom-flannel/Start 46.54
372 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
373 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
374 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
375 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.02
376 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
377 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
378 TestNetworkPlugins/group/kindnet/NetCatPod 9.18
379 TestNetworkPlugins/group/calico/ControllerPod 6.01
380 TestNetworkPlugins/group/kindnet/DNS 0.15
381 TestNetworkPlugins/group/kindnet/Localhost 0.12
382 TestNetworkPlugins/group/kindnet/HairPin 0.12
383 TestNetworkPlugins/group/calico/KubeletFlags 0.26
384 TestNetworkPlugins/group/calico/NetCatPod 9.17
385 TestNetworkPlugins/group/calico/DNS 0.13
386 TestNetworkPlugins/group/calico/Localhost 0.11
387 TestNetworkPlugins/group/calico/HairPin 0.11
388 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
389 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.19
390 TestNetworkPlugins/group/custom-flannel/DNS 0.12
391 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
392 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
TestDownloadOnly/v1.20.0/json-events (4.51s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-845536 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-845536 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.510230474s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (4.51s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0919 18:38:54.645578  760079 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0919 18:38:54.645689  760079 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19664-753213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-845536
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-845536: exit status 85 (61.159411ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-845536 | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC |          |
	|         | -p download-only-845536        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 18:38:50
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 18:38:50.175802  760092 out.go:345] Setting OutFile to fd 1 ...
	I0919 18:38:50.175949  760092 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:38:50.175959  760092 out.go:358] Setting ErrFile to fd 2...
	I0919 18:38:50.175964  760092 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:38:50.176159  760092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-753213/.minikube/bin
	W0919 18:38:50.176282  760092 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19664-753213/.minikube/config/config.json: open /home/jenkins/minikube-integration/19664-753213/.minikube/config/config.json: no such file or directory
	I0919 18:38:50.176907  760092 out.go:352] Setting JSON to true
	I0919 18:38:50.177978  760092 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12080,"bootTime":1726759050,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 18:38:50.178087  760092 start.go:139] virtualization: kvm guest
	I0919 18:38:50.180543  760092 out.go:97] [download-only-845536] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0919 18:38:50.180682  760092 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19664-753213/.minikube/cache/preloaded-tarball: no such file or directory
	I0919 18:38:50.180724  760092 notify.go:220] Checking for updates...
	I0919 18:38:50.182039  760092 out.go:169] MINIKUBE_LOCATION=19664
	I0919 18:38:50.183511  760092 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 18:38:50.185103  760092 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19664-753213/kubeconfig
	I0919 18:38:50.186615  760092 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-753213/.minikube
	I0919 18:38:50.188011  760092 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0919 18:38:50.190444  760092 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0919 18:38:50.190762  760092 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 18:38:50.215759  760092 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0919 18:38:50.215872  760092 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:38:50.263444  760092 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-19 18:38:50.253739087 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 18:38:50.263569  760092 docker.go:318] overlay module found
	I0919 18:38:50.265254  760092 out.go:97] Using the docker driver based on user configuration
	I0919 18:38:50.265275  760092 start.go:297] selected driver: docker
	I0919 18:38:50.265280  760092 start.go:901] validating driver "docker" against <nil>
	I0919 18:38:50.265365  760092 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:38:50.312343  760092 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-19 18:38:50.302979102 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 18:38:50.312518  760092 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 18:38:50.313154  760092 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0919 18:38:50.313310  760092 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 18:38:50.315355  760092 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-845536 host does not exist
	  To start a cluster, run: "minikube start -p download-only-845536"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-845536
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (5.56s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-759185 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-759185 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.562572247s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (5.56s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0919 18:39:00.601737  760079 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I0919 18:39:00.601783  760079 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19664-753213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-759185
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-759185: exit status 85 (61.123757ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-845536 | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC |                     |
	|         | -p download-only-845536        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC | 19 Sep 24 18:38 UTC |
	| delete  | -p download-only-845536        | download-only-845536 | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC | 19 Sep 24 18:38 UTC |
	| start   | -o=json --download-only        | download-only-759185 | jenkins | v1.34.0 | 19 Sep 24 18:38 UTC |                     |
	|         | -p download-only-759185        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/19 18:38:55
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 18:38:55.079250  760435 out.go:345] Setting OutFile to fd 1 ...
	I0919 18:38:55.079550  760435 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:38:55.079562  760435 out.go:358] Setting ErrFile to fd 2...
	I0919 18:38:55.079566  760435 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 18:38:55.079741  760435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-753213/.minikube/bin
	I0919 18:38:55.080300  760435 out.go:352] Setting JSON to true
	I0919 18:38:55.081213  760435 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":12085,"bootTime":1726759050,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 18:38:55.081322  760435 start.go:139] virtualization: kvm guest
	I0919 18:38:55.083361  760435 out.go:97] [download-only-759185] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0919 18:38:55.083533  760435 notify.go:220] Checking for updates...
	I0919 18:38:55.084976  760435 out.go:169] MINIKUBE_LOCATION=19664
	I0919 18:38:55.086481  760435 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 18:38:55.087888  760435 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19664-753213/kubeconfig
	I0919 18:38:55.089075  760435 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-753213/.minikube
	I0919 18:38:55.090329  760435 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0919 18:38:55.092728  760435 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0919 18:38:55.092935  760435 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 18:38:55.115571  760435 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0919 18:38:55.115663  760435 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:38:55.163236  760435 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-09-19 18:38:55.154142342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 18:38:55.163360  760435 docker.go:318] overlay module found
	I0919 18:38:55.165372  760435 out.go:97] Using the docker driver based on user configuration
	I0919 18:38:55.165407  760435 start.go:297] selected driver: docker
	I0919 18:38:55.165418  760435 start.go:901] validating driver "docker" against <nil>
	I0919 18:38:55.165533  760435 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 18:38:55.213077  760435 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-09-19 18:38:55.203104248 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 18:38:55.213242  760435 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0919 18:38:55.213766  760435 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0919 18:38:55.213926  760435 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 18:38:55.215740  760435 out.go:169] Using Docker driver with root privileges
	I0919 18:38:55.217123  760435 cni.go:84] Creating CNI manager for ""
	I0919 18:38:55.217181  760435 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 18:38:55.217192  760435 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 18:38:55.217271  760435 start.go:340] cluster config:
	{Name:download-only-759185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-759185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 18:38:55.218704  760435 out.go:97] Starting "download-only-759185" primary control-plane node in "download-only-759185" cluster
	I0919 18:38:55.218720  760435 cache.go:121] Beginning downloading kic base image for docker with crio
	I0919 18:38:55.220108  760435 out.go:97] Pulling base image v0.0.45-1726589491-19662 ...
	I0919 18:38:55.220132  760435 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 18:38:55.220192  760435 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0919 18:38:55.235903  760435 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0919 18:38:55.236036  760435 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0919 18:38:55.236053  760435 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0919 18:38:55.236057  760435 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0919 18:38:55.236064  760435 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0919 18:38:55.249621  760435 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0919 18:38:55.249676  760435 cache.go:56] Caching tarball of preloaded images
	I0919 18:38:55.249856  760435 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 18:38:55.251948  760435 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0919 18:38:55.251968  760435 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0919 18:38:55.283794  760435 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19664-753213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0919 18:38:59.171985  760435 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0919 18:38:59.172093  760435 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19664-753213/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0919 18:38:59.909079  760435 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0919 18:38:59.909504  760435 profile.go:143] Saving config to /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/download-only-759185/config.json ...
	I0919 18:38:59.909542  760435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/download-only-759185/config.json: {Name:mk639b62e4d3e87a8bd4ce49bc087b4abda13644 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 18:38:59.909761  760435 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0919 18:38:59.909959  760435 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19664-753213/.minikube/cache/linux/amd64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-759185 host does not exist
	  To start a cluster, run: "minikube start -p download-only-759185"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)
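
Note that this PASS depends on a non-zero exit: "minikube logs" against a download-only profile, whose host was never created, is expected to fail with exit status 85, as the Non-zero exit line above shows. A minimal sketch of asserting that status from Go, assuming the same binary path the suite uses:

	// exit85_check.go - illustrative only; exit status 85 is taken from the log.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-759185")
		err := cmd.Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 85 {
			fmt.Println("got expected exit status 85 (host does not exist)")
			return
		}
		fmt.Printf("unexpected result: %v\n", err)
	}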

TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-759185
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (1.09s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-985684 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-985684" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-985684
--- PASS: TestDownloadOnlyKic (1.09s)

TestBinaryMirror (0.76s)

=== RUN   TestBinaryMirror
I0919 18:39:02.352052  760079 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-515604 --alsologtostderr --binary-mirror http://127.0.0.1:32895 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-515604" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-515604
--- PASS: TestBinaryMirror (0.76s)
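
The test points --binary-mirror at a throwaway HTTP endpoint on 127.0.0.1:32895. A minimal sketch of a server that could stand in for such a mirror, assuming the binaries and their .sha256 files are laid out on disk under the same release/<version>/bin/<os>/<arch> paths as the dl.k8s.io URL logged above (the mirror-root directory name is an assumption):

	// mirror.go - illustrative only; directory layout is an assumption.
	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve kubectl/kubeadm/kubelet plus their .sha256 checksum files so
		// clients can fetch both the artifact and its checksum from the mirror.
		fs := http.FileServer(http.Dir("./mirror-root"))
		log.Fatal(http.ListenAndServe("127.0.0.1:32895", fs))
	}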

TestOffline (59.96s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-049998 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-049998 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (57.585166727s)
helpers_test.go:175: Cleaning up "offline-crio-049998" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-049998
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-049998: (2.379535587s)
--- PASS: TestOffline (59.96s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-685250
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-685250: exit status 85 (52.068309ms)

-- stdout --
	* Profile "addons-685250" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-685250"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-685250
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-685250: exit status 85 (50.667764ms)

-- stdout --
	* Profile "addons-685250" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-685250"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (173.75s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-685250 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-685250 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m53.746052431s)
--- PASS: TestAddons/Setup (173.75s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-685250 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-685250 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/parallel/InspektorGadget (31.85s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-5nngx" [4ea7fd8d-f192-432a-b8d3-72e36416229e] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004640173s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-685250
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-685250: (26.845907208s)
--- PASS: TestAddons/parallel/InspektorGadget (31.85s)

TestAddons/parallel/HelmTiller (13.05s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.195885ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-64k5s" [bedc3304-f3bb-4c40-bb2c-bec621a3645c] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003986167s
addons_test.go:475: (dbg) Run:  kubectl --context addons-685250 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-685250 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.535865329s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-685250 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.05s)

TestAddons/parallel/Headlamp (16.41s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-685250 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-ttv2g" [c9e75ebd-ac0a-4be7-a148-aeeb2d8dfb92] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-ttv2g" [c9e75ebd-ac0a-4be7-a148-aeeb2d8dfb92] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004207481s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-685250 addons disable headlamp --alsologtostderr -v=1
2024/09/19 18:51:10 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-685250 addons disable headlamp --alsologtostderr -v=1: (5.663765648s)
--- PASS: TestAddons/parallel/Headlamp (16.41s)

TestAddons/parallel/CloudSpanner (6.5s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-5pf26" [3261b626-1f4a-43cd-b33f-452a1937a9e8] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004318031s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-685250
--- PASS: TestAddons/parallel/CloudSpanner (6.50s)

TestAddons/parallel/LocalPath (52.96s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-685250 apply -f testdata/storage-provisioner-rancher/pvc.yaml
I0919 18:49:59.785802  760079 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:988: (dbg) Run:  kubectl --context addons-685250 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685250 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685250 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685250 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685250 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685250 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-685250 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [0ac5fe63-73ad-4f1a-a3e5-fa993131966e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [0ac5fe63-73ad-4f1a-a3e5-fa993131966e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [0ac5fe63-73ad-4f1a-a3e5-fa993131966e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003504679s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-685250 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-685250 ssh "cat /opt/local-path-provisioner/pvc-83c31ed0-fc42-4249-94b0-a7e77464cc71_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-685250 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-685250 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-685250 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-685250 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.110771047s)
--- PASS: TestAddons/parallel/LocalPath (52.96s)
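
The repeated "get pvc test-pvc -o jsonpath={.status.phase}" calls above are a poll loop waiting for the claim to leave Pending (local-path provisions on first consumer). A minimal sketch of that loop, shelling out to kubectl the same way the helper does; the context, namespace, and claim name are the ones from the log, while the 2-second interval is an assumption:

	// pvc_wait.go - illustrative only, not the suite's helper.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForPVCPhase polls the PVC's status.phase until it matches want
	// or the timeout elapses.
	func waitForPVCPhase(kubeCtx, ns, name, want string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", kubeCtx, "get", "pvc", name,
				"-n", ns, "-o", "jsonpath={.status.phase}").Output()
			if err == nil && strings.TrimSpace(string(out)) == want {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %s/%s never reached phase %q within %v", ns, name, want, timeout)
	}

	func main() {
		if err := waitForPVCPhase("addons-685250", "default", "test-pvc", "Bound", 5*time.Minute); err != nil {
			fmt.Println(err)
		}
	}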

TestAddons/parallel/NvidiaDevicePlugin (5.46s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-lnffq" [b2573f29-e8a6-4fc7-9a19-a01fb32e67f2] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004634803s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-685250
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.46s)

TestAddons/parallel/Yakd (10.64s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-d475t" [4ffacbe1-9852-4577-bcc1-5dc60e2480e7] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003718515s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-685250 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-685250 addons disable yakd --alsologtostderr -v=1: (5.631062356s)
--- PASS: TestAddons/parallel/Yakd (10.64s)

TestAddons/StoppedEnableDisable (12.09s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-685250
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-685250: (11.845374605s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-685250
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-685250
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-685250
--- PASS: TestAddons/StoppedEnableDisable (12.09s)

TestCertOptions (28.64s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-756851 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-756851 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (22.861942041s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-756851 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-756851 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-756851 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-756851" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-756851
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-756851: (5.015206025s)
--- PASS: TestCertOptions (28.64s)
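
The openssl call above dumps the API server certificate so the test can confirm the extra SANs (192.168.15.15, www.google.com) from the flags made it in. The same check can be done in Go with crypto/x509; this is a minimal sketch assuming the certificate has already been copied off the node (the local file name is an assumption):

	// san_check.go - illustrative only.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"net"
		"os"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt") // copied off the node, e.g. via minikube ssh
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		want := net.ParseIP("192.168.15.15")
		found := false
		for _, ip := range cert.IPAddresses {
			if ip.Equal(want) {
				found = true
				break
			}
		}
		fmt.Printf("IP SAN %v present: %v; DNS SANs: %v\n", want, found, cert.DNSNames)
	}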

TestCertExpiration (223.01s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-073840 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-073840 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (25.918886438s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-073840 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-073840 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (14.82513981s)
helpers_test.go:175: Cleaning up "cert-expiration-073840" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-073840
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-073840: (2.265673601s)
--- PASS: TestCertExpiration (223.01s)

TestForceSystemdFlag (29.53s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-190351 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-190351 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.875179431s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-190351 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-190351" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-190351
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-190351: (2.343935281s)
--- PASS: TestForceSystemdFlag (29.53s)
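
The test only cats /etc/crio/crio.conf.d/02-crio.conf; what --force-systemd is expected to change there is CRI-O's cgroup manager. A minimal sketch of that assertion, with the caveat that the exact TOML line is an assumption since the file contents are not shown in the log:

	// cgroup_check.go - illustrative only.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/etc/crio/crio.conf.d/02-crio.conf")
		if err != nil {
			panic(err)
		}
		ok := strings.Contains(string(data), `cgroup_manager = "systemd"`)
		fmt.Println("systemd cgroup manager configured:", ok)
	}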

TestForceSystemdEnv (29.67s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-592366 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-592366 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.009032315s)
helpers_test.go:175: Cleaning up "force-systemd-env-592366" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-592366
E0919 19:43:38.264266  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/functional-141069/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-592366: (6.656498711s)
--- PASS: TestForceSystemdEnv (29.67s)

TestKVMDriverInstallOrUpdate (3.11s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0919 19:43:50.999773  760079 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0919 19:43:50.999894  760079 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0919 19:43:51.030348  760079 install.go:62] docker-machine-driver-kvm2: exit status 1
W0919 19:43:51.030793  760079 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0919 19:43:51.030848  760079 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1052039108/001/docker-machine-driver-kvm2
I0919 19:43:51.304680  760079 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1052039108/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4668640 0x4668640 0x4668640 0x4668640 0x4668640 0x4668640 0x4668640] Decompressors:map[bz2:0xc000885980 gz:0xc000885988 tar:0xc0008858f0 tar.bz2:0xc000885900 tar.gz:0xc000885920 tar.xz:0xc000885950 tar.zst:0xc000885970 tbz2:0xc000885900 tgz:0xc000885920 txz:0xc000885950 tzst:0xc000885970 xz:0xc000885990 zip:0xc0008859a0 zst:0xc000885998] Getters:map[file:0xc00128eac0 http:0xc0008b6b40 https:0xc0008b6b90] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0919 19:43:51.304735  760079 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1052039108/001/docker-machine-driver-kvm2
I0919 19:43:52.655495  760079 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0919 19:43:52.655590  760079 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0919 19:43:52.686120  760079 install.go:137] /home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0919 19:43:52.686154  760079 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0919 19:43:52.686219  760079 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0919 19:43:52.686247  760079 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1052039108/002/docker-machine-driver-kvm2
I0919 19:43:52.847103  760079 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1052039108/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4668640 0x4668640 0x4668640 0x4668640 0x4668640 0x4668640 0x4668640] Decompressors:map[bz2:0xc000885980 gz:0xc000885988 tar:0xc0008858f0 tar.bz2:0xc000885900 tar.gz:0xc000885920 tar.xz:0xc000885950 tar.zst:0xc000885970 tbz2:0xc000885900 tgz:0xc000885920 txz:0xc000885950 tzst:0xc000885970 xz:0xc000885990 zip:0xc0008859a0 zst:0xc000885998] Getters:map[file:0xc0012c48f0 http:0xc0004cb540 https:0xc0004cb680] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0919 19:43:52.847156  760079 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1052039108/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (3.11s)
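
The PASS here rides on the fallback visible at driver.go:46 above: the arch-specific asset (docker-machine-driver-kvm2-amd64) 404s on its checksum file, so the download retries the common, unsuffixed name. A minimal sketch of that try-then-fall-back shape; the real code goes through go-getter and verifies the .sha256 checksum, both omitted here:

	// fallback_download.go - illustrative only; URLs are the ones from the log.
	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// fetch downloads url to dst, treating any non-200 status as an error.
	func fetch(url, dst string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("bad response code: %d", resp.StatusCode)
		}
		f, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(f, resp.Body)
		return err
	}

	func main() {
		base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2"
		if err := fetch(base+"-amd64", "docker-machine-driver-kvm2"); err != nil {
			fmt.Println("arch specific download failed, trying the common version:", err)
			if err := fetch(base, "docker-machine-driver-kvm2"); err != nil {
				panic(err)
			}
		}
	}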

TestErrorSpam/setup (23.32s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-812332 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-812332 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-812332 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-812332 --driver=docker  --container-runtime=crio: (23.320873659s)
--- PASS: TestErrorSpam/setup (23.32s)

TestErrorSpam/start (0.57s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-812332 --log_dir /tmp/nospam-812332 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-812332 --log_dir /tmp/nospam-812332 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-812332 --log_dir /tmp/nospam-812332 start --dry-run
--- PASS: TestErrorSpam/start (0.57s)

TestErrorSpam/status (0.86s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-812332 --log_dir /tmp/nospam-812332 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-812332 --log_dir /tmp/nospam-812332 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-812332 --log_dir /tmp/nospam-812332 status
--- PASS: TestErrorSpam/status (0.86s)

TestErrorSpam/pause (1.49s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-812332 --log_dir /tmp/nospam-812332 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-812332 --log_dir /tmp/nospam-812332 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-812332 --log_dir /tmp/nospam-812332 pause
--- PASS: TestErrorSpam/pause (1.49s)

TestErrorSpam/unpause (1.69s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-812332 --log_dir /tmp/nospam-812332 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-812332 --log_dir /tmp/nospam-812332 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-812332 --log_dir /tmp/nospam-812332 unpause
--- PASS: TestErrorSpam/unpause (1.69s)

TestErrorSpam/stop (1.35s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-812332 --log_dir /tmp/nospam-812332 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-812332 --log_dir /tmp/nospam-812332 stop: (1.169948408s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-812332 --log_dir /tmp/nospam-812332 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-812332 --log_dir /tmp/nospam-812332 stop
--- PASS: TestErrorSpam/stop (1.35s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19664-753213/.minikube/files/etc/test/nested/copy/760079/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (68.8s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-141069 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-141069 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m8.794517679s)
--- PASS: TestFunctional/serial/StartWithProxy (68.80s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (58.57s)

=== RUN   TestFunctional/serial/SoftStart
I0919 19:01:45.612106  760079 config.go:182] Loaded profile config "functional-141069": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-141069 --alsologtostderr -v=8
E0919 19:01:57.208400  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:01:57.214839  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:01:57.226268  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:01:57.247775  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:01:57.289239  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:01:57.370773  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:01:57.532369  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:01:57.854075  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:01:58.496160  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:01:59.777845  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:02:02.340891  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:02:07.463171  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:02:17.704815  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:02:38.187045  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-141069 --alsologtostderr -v=8: (58.568151957s)
functional_test.go:663: soft start took 58.56894872s for "functional-141069" cluster.
I0919 19:02:44.180669  760079 config.go:182] Loaded profile config "functional-141069": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (58.57s)
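Note on the cert_rotation noise above: the repeated "Unhandled Error" lines come from client-go's certificate-rotation watcher, which still references the client.crt of the addons-685250 profile torn down by the earlier TestAddons run; they are stale-watcher noise, not a failure of this test. A minimal sanity check (path taken from the log, outcome assumed):

	ls /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/
	# functional-141069 should be listed; addons-685250 should be gone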

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.06s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-141069 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-141069 cache add registry.k8s.io/pause:3.3: (1.082363045s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-141069 cache add registry.k8s.io/pause:latest: (1.004906907s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.05s)

TestFunctional/serial/CacheCmd/cache/add_local (1.32s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-141069 /tmp/TestFunctionalserialCacheCmdcacheadd_local2811737236/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 cache add minikube-local-cache-test:functional-141069
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-141069 cache add minikube-local-cache-test:functional-141069: (1.001396556s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 cache delete minikube-local-cache-test:functional-141069
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-141069
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.32s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-141069 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (268.024978ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)
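The intermediate non-zero exit above is the point of the test: the image is removed from the node, inspecti confirms it is gone, and cache reload pushes the cached copy back. The same cycle, reduced to the commands from the log:

	out/minikube-linux-amd64 -p functional-141069 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-141069 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # exit 1: image gone
	out/minikube-linux-amd64 -p functional-141069 cache reload
	out/minikube-linux-amd64 -p functional-141069 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # exit 0: image restored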

TestFunctional/serial/CacheCmd/cache/delete (0.1s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 kubectl -- --context functional-141069 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-141069 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (39.19s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-141069 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0919 19:03:19.149548  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-141069 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.18505092s)
functional_test.go:761: restart took 39.185175211s for "functional-141069" cluster.
I0919 19:03:30.145443  760079 config.go:182] Loaded profile config "functional-141069": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (39.19s)
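--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision passes the flag through to the kube-apiserver static pod on restart. A hedged way to confirm the flag landed (assumes the conventional static-pod name kube-apiserver-functional-141069):

	kubectl --context functional-141069 -n kube-system get pod kube-apiserver-functional-141069 -o yaml | grep enable-admission-plugins
	# expected to show NamespaceAutoProvision among the enabled plugins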

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-141069 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.34s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-141069 logs: (1.34404271s)
--- PASS: TestFunctional/serial/LogsCmd (1.34s)

TestFunctional/serial/LogsFileCmd (1.36s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 logs --file /tmp/TestFunctionalserialLogsFileCmd1911868195/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-141069 logs --file /tmp/TestFunctionalserialLogsFileCmd1911868195/001/logs.txt: (1.360862211s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.36s)

TestFunctional/serial/InvalidService (4.54s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-141069 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-141069
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-141069: exit status 115 (323.310456ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31576 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-141069 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-141069 delete -f testdata/invalidsvc.yaml: (1.046731413s)
--- PASS: TestFunctional/serial/InvalidService (4.54s)
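Exit status 115 (SVC_UNREACHABLE) is the expected outcome here: invalid-svc selects no running pod, so minikube prints the NodePort table but refuses to open the URL. One way to observe the same condition directly (standard kubectl, assumed to apply to this manifest):

	kubectl --context functional-141069 get endpoints invalid-svc
	# ENDPOINTS column is empty: no ready pod backs the service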

TestFunctional/parallel/ConfigCmd (0.38s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-141069 config get cpus: exit status 14 (58.576148ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-141069 config get cpus: exit status 14 (63.708551ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)
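Exit status 14 marks a config key that is not set, which is exactly what the unset/get pairs above assert. The full cycle, as run by the test:

	out/minikube-linux-amd64 -p functional-141069 config set cpus 2
	out/minikube-linux-amd64 -p functional-141069 config get cpus    # prints 2, exit 0
	out/minikube-linux-amd64 -p functional-141069 config unset cpus
	out/minikube-linux-amd64 -p functional-141069 config get cpus    # exit 14: key not found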

TestFunctional/parallel/DashboardCmd (61.59s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-141069 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-141069 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 802449: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (61.59s)

TestFunctional/parallel/DryRun (0.35s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-141069 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-141069 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (148.868015ms)

-- stdout --
	* [functional-141069] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19664-753213/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-753213/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0919 19:04:22.969004  802009 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:04:22.969106  802009 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:04:22.969114  802009 out.go:358] Setting ErrFile to fd 2...
	I0919 19:04:22.969118  802009 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:04:22.969308  802009 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-753213/.minikube/bin
	I0919 19:04:22.969871  802009 out.go:352] Setting JSON to false
	I0919 19:04:22.971041  802009 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":13613,"bootTime":1726759050,"procs":257,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 19:04:22.971166  802009 start.go:139] virtualization: kvm guest
	I0919 19:04:22.973452  802009 out.go:177] * [functional-141069] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0919 19:04:22.975124  802009 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 19:04:22.975191  802009 notify.go:220] Checking for updates...
	I0919 19:04:22.977770  802009 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 19:04:22.979013  802009 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-753213/kubeconfig
	I0919 19:04:22.980266  802009 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-753213/.minikube
	I0919 19:04:22.981492  802009 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 19:04:22.982873  802009 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 19:04:22.984696  802009 config.go:182] Loaded profile config "functional-141069": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:04:22.985276  802009 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 19:04:23.008756  802009 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0919 19:04:23.008860  802009 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 19:04:23.058599  802009 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-19 19:04:23.048459415 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 19:04:23.058707  802009 docker.go:318] overlay module found
	I0919 19:04:23.060660  802009 out.go:177] * Using the docker driver based on existing profile
	I0919 19:04:23.061972  802009 start.go:297] selected driver: docker
	I0919 19:04:23.061986  802009 start.go:901] validating driver "docker" against &{Name:functional-141069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-141069 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:04:23.062076  802009 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 19:04:23.064062  802009 out.go:201] 
	W0919 19:04:23.065290  802009 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0919 19:04:23.066323  802009 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-141069 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.35s)
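Exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) is raised during --dry-run validation, before anything is created: 250MB is below the 1800MB usable minimum the validator enforces. A hypothetical passing variant of the same command only needs to clear that floor:

	out/minikube-linux-amd64 start -p functional-141069 --dry-run --memory 1800MB --alsologtostderr --driver=docker --container-runtime=crio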

TestFunctional/parallel/InternationalLanguage (0.16s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-141069 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-141069 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (155.180635ms)

-- stdout --
	* [functional-141069] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19664-753213/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-753213/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0919 19:04:22.813669  801877 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:04:22.813804  801877 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:04:22.813813  801877 out.go:358] Setting ErrFile to fd 2...
	I0919 19:04:22.813817  801877 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:04:22.814147  801877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-753213/.minikube/bin
	I0919 19:04:22.815001  801877 out.go:352] Setting JSON to false
	I0919 19:04:22.816201  801877 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":13613,"bootTime":1726759050,"procs":256,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 19:04:22.816316  801877 start.go:139] virtualization: kvm guest
	I0919 19:04:22.818002  801877 out.go:177] * [functional-141069] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0919 19:04:22.819617  801877 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 19:04:22.819615  801877 notify.go:220] Checking for updates...
	I0919 19:04:22.820994  801877 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 19:04:22.822220  801877 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-753213/kubeconfig
	I0919 19:04:22.823532  801877 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-753213/.minikube
	I0919 19:04:22.824847  801877 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 19:04:22.826064  801877 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 19:04:22.827803  801877 config.go:182] Loaded profile config "functional-141069": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:04:22.828355  801877 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 19:04:22.853684  801877 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0919 19:04:22.853775  801877 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 19:04:22.908472  801877 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-19 19:04:22.896872844 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 19:04:22.908649  801877 docker.go:318] overlay module found
	I0919 19:04:22.910637  801877 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0919 19:04:22.911935  801877 start.go:297] selected driver: docker
	I0919 19:04:22.911957  801877 start.go:901] validating driver "docker" against &{Name:functional-141069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-141069 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 19:04:22.912097  801877 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 19:04:22.914269  801877 out.go:201] 
	W0919 19:04:22.915641  801877 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0919 19:04:22.916953  801877 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
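Same dry-run failure as above, but with minikube's output localized to French; the language is presumably selected from the caller's locale environment (an assumption here, since the log does not show the test's env). A sketch of forcing it:

	LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-141069 --dry-run --memory 250MB --driver=docker --container-runtime=crio
	# expected: "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY ...", exit 23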

TestFunctional/parallel/StatusCmd (0.88s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.88s)

TestFunctional/parallel/ServiceCmdConnect (41.49s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-141069 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-141069 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-llvnt" [f0d85b46-13d8-4eba-9a2c-c1dc0c6a5d3f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-llvnt" [f0d85b46-13d8-4eba-9a2c-c1dc0c6a5d3f] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 41.003779202s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30123
functional_test.go:1675: http://192.168.49.2:30123: success! body:
Hostname: hello-node-connect-67bdd5bbb4-llvnt

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30123
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (41.49s)
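service <name> --url resolves the NodePort endpoint without opening a browser; the echoserver body above is what the test fetched from it. An equivalent manual check, using only a command shown in the log plus curl:

	URL=$(out/minikube-linux-amd64 -p functional-141069 service hello-node-connect --url)
	curl -s "$URL"   # echoserver reflects the request headers and body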

TestFunctional/parallel/AddonsCmd (0.14s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/SSHCmd (0.61s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.61s)

TestFunctional/parallel/CpCmd (1.76s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 ssh -n functional-141069 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 cp functional-141069:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1651110286/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 ssh -n functional-141069 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 ssh -n functional-141069 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.76s)

TestFunctional/parallel/FileSync (0.24s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/760079/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 ssh "sudo cat /etc/test/nested/copy/760079/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

TestFunctional/parallel/CertSync (1.76s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/760079.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 ssh "sudo cat /etc/ssl/certs/760079.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/760079.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 ssh "sudo cat /usr/share/ca-certificates/760079.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/7600792.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 ssh "sudo cat /etc/ssl/certs/7600792.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/7600792.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 ssh "sudo cat /usr/share/ca-certificates/7600792.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.76s)
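The hash-named files (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash links of the kind CA lookup uses; the test order implies each .0 file corresponds to the .pem checked just before it. A hedged verification (the pairing is an assumption drawn from that ordering):

	out/minikube-linux-amd64 -p functional-141069 ssh "sudo openssl x509 -noout -subject_hash -in /etc/ssl/certs/760079.pem"
	# expected to print 51391683, the basename of the .0 link checked above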

TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-141069 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.49s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-141069 ssh "sudo systemctl is-active docker": exit status 1 (246.139499ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-141069 ssh "sudo systemctl is-active containerd": exit status 1 (240.99975ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.49s)
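systemctl is-active exits with status 3 for an inactive unit (visible in the ssh stderr above), which minikube ssh surfaces as a non-zero exit; both failures therefore confirm that docker and containerd are disabled while crio serves as the runtime. The positive counterpart:

	out/minikube-linux-amd64 -p functional-141069 ssh "sudo systemctl is-active crio"   # prints "active", exit 0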

TestFunctional/parallel/License (0.19s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.19s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-141069 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-141069 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-141069 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-141069 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 798768: os: process already finished
helpers_test.go:508: unable to kill pid 798317: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.57s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-141069 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/ServiceCmd/DeployApp (40.16s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-141069 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-141069 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-grt5w" [b5ddea47-81b0-47cc-8a08-43c475f3e1cf] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-grt5w" [b5ddea47-81b0-47cc-8a08-43c475f3e1cf] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 40.003655064s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (40.16s)
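The create deployment / expose pattern above is how these service tests provision a NodePort backend; the 40s wait is the pod pulling and starting echoserver. An equivalent wait using kubectl's own rollout tracking instead of the test's label polling (a sketch, not what the test runs):

	kubectl --context functional-141069 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-141069 expose deployment hello-node --type=NodePort --port=8080
	kubectl --context functional-141069 rollout status deployment/hello-node --timeout=10m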

TestFunctional/parallel/ServiceCmd/List (0.48s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.48s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.48s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 service list -o json
functional_test.go:1494: Took "478.696874ms" to run "out/minikube-linux-amd64 -p functional-141069 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.48s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31898
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ServiceCmd/Format (0.35s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "317.480078ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "52.480696ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ServiceCmd/URL (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31898
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)
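
The endpoint found above can be probed directly; a minimal sketch (the 200 status code is what the echoserver image is expected to return):
URL=$(out/minikube-linux-amd64 -p functional-141069 service hello-node --url)
curl -sS -o /dev/null -w '%{http_code}\n' "$URL"   # expect 200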

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "352.784593ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "47.457499ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)
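
A sketch of consuming the profile JSON, assuming the {"valid": [...], "invalid": [...]} layout minikube uses for this output:
out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'   # one profile name per line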

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (25.67s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-141069 /tmp/TestFunctionalparallelMountCmdany-port4105875373/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726772661517007183" to /tmp/TestFunctionalparallelMountCmdany-port4105875373/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726772661517007183" to /tmp/TestFunctionalparallelMountCmdany-port4105875373/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726772661517007183" to /tmp/TestFunctionalparallelMountCmdany-port4105875373/001/test-1726772661517007183
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-141069 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (273.828551ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0919 19:04:21.791221  760079 retry.go:31] will retry after 503.954167ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 19 19:04 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 19 19:04 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 19 19:04 test-1726772661517007183
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 ssh cat /mount-9p/test-1726772661517007183
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-141069 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [8ec70f92-42f3-4060-a826-25fa627742d4] Pending
helpers_test.go:344: "busybox-mount" [8ec70f92-42f3-4060-a826-25fa627742d4] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0919 19:04:41.071660  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [8ec70f92-42f3-4060-a826-25fa627742d4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [8ec70f92-42f3-4060-a826-25fa627742d4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 23.003957563s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-141069 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-141069 /tmp/TestFunctionalparallelMountCmdany-port4105875373/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (25.67s)
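
The flow this test automates can be reproduced by hand; a sketch, with /tmp/src standing in for any host directory and run from one interactive shell session:
out/minikube-linux-amd64 mount -p functional-141069 /tmp/src:/mount-9p &            # start the 9p server in the background
out/minikube-linux-amd64 -p functional-141069 ssh "findmnt -T /mount-9p | grep 9p"  # confirm the guest sees a 9p mount
out/minikube-linux-amd64 -p functional-141069 ssh -- ls -la /mount-9p               # host files visible from the guest
kill %1                                                                             # stop the mount daemon when done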

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.54s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-141069 /tmp/TestFunctionalparallelMountCmdspecific-port3577148630/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-141069 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (256.891618ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0919 19:04:47.441932  760079 retry.go:31] will retry after 340.725896ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-141069 /tmp/TestFunctionalparallelMountCmdspecific-port3577148630/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-141069 ssh "sudo umount -f /mount-9p": exit status 1 (250.232951ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-141069 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-141069 /tmp/TestFunctionalparallelMountCmdspecific-port3577148630/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.54s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.67s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-141069 /tmp/TestFunctionalparallelMountCmdVerifyCleanup202461660/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-141069 /tmp/TestFunctionalparallelMountCmdVerifyCleanup202461660/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-141069 /tmp/TestFunctionalparallelMountCmdVerifyCleanup202461660/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-141069 ssh "findmnt -T" /mount1: exit status 1 (302.246554ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0919 19:04:49.032670  760079 retry.go:31] will retry after 594.977929ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-141069 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-141069 /tmp/TestFunctionalparallelMountCmdVerifyCleanup202461660/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-141069 /tmp/TestFunctionalparallelMountCmdVerifyCleanup202461660/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-141069 /tmp/TestFunctionalparallelMountCmdVerifyCleanup202461660/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.67s)
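
The cleanup path exercised here reduces to a single call, which kills every mount daemon the profile started regardless of mount point:
out/minikube-linux-amd64 mount -p functional-141069 --kill=true   # tears down /mount1, /mount2 and /mount3 at once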

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.46s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.46s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-141069 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-141069
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-141069 image ls --format short --alsologtostderr:
I0919 19:05:28.866940  805642 out.go:345] Setting OutFile to fd 1 ...
I0919 19:05:28.867083  805642 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:05:28.867096  805642 out.go:358] Setting ErrFile to fd 2...
I0919 19:05:28.867103  805642 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:05:28.867288  805642 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-753213/.minikube/bin
I0919 19:05:28.867975  805642 config.go:182] Loaded profile config "functional-141069": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0919 19:05:28.868075  805642 config.go:182] Loaded profile config "functional-141069": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0919 19:05:28.868473  805642 cli_runner.go:164] Run: docker container inspect functional-141069 --format={{.State.Status}}
I0919 19:05:28.885450  805642 ssh_runner.go:195] Run: systemctl --version
I0919 19:05:28.885499  805642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-141069
I0919 19:05:28.902747  805642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/functional-141069/id_rsa Username:docker}
I0919 19:05:28.991685  805642 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
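
The same listing is available in the three other formats exercised by the tests that follow; a quick reference:
out/minikube-linux-amd64 -p functional-141069 image ls --format short   # repo:tag names only
out/minikube-linux-amd64 -p functional-141069 image ls --format table   # human-readable table
out/minikube-linux-amd64 -p functional-141069 image ls --format json    # machine-readable array
out/minikube-linux-amd64 -p functional-141069 image ls --format yaml    # same data as YAML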

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-141069 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-141069  | e21d15d0a9e14 | 3.33kB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| localhost/my-image                      | functional-141069  | aede1da68984d | 1.47MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-141069 image ls --format table --alsologtostderr:
I0919 19:05:31.434692  806223 out.go:345] Setting OutFile to fd 1 ...
I0919 19:05:31.434795  806223 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:05:31.434802  806223 out.go:358] Setting ErrFile to fd 2...
I0919 19:05:31.434807  806223 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:05:31.435002  806223 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-753213/.minikube/bin
I0919 19:05:31.435685  806223 config.go:182] Loaded profile config "functional-141069": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0919 19:05:31.435784  806223 config.go:182] Loaded profile config "functional-141069": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0919 19:05:31.436158  806223 cli_runner.go:164] Run: docker container inspect functional-141069 --format={{.State.Status}}
I0919 19:05:31.453169  806223 ssh_runner.go:195] Run: systemctl --version
I0919 19:05:31.453230  806223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-141069
I0919 19:05:31.470598  806223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/functional-141069/id_rsa Username:docker}
I0919 19:05:31.559637  806223 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-141069 image ls --format json --alsologtostderr:
[{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},
{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},
{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},
{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},
{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},
{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},
{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},
{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},
{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},
{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},
{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},
{"id":"83ca03a4b72cbbdede1664641c86eb5d61a0a8ed865d82cf8efd245862833f40","repoDigests":["docker.io/library/370638cb18a3d7240f0db2033bbfa44c77120cca83ca6cb42a109fed0e80e944-tmp@sha256:15c64996a9926c29b72f42ed6954fe550747906d44029824ba9d50d1767d648c"],"repoTags":[],"size":"1465612"},
{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},
{"id":"e21d15d0a9e14167c38882cd3ef1c7b2ec8eff44bcdb10531c8d8f55e196dfc9","repoDigests":["localhost/minikube-local-cache-test@sha256:e61c7465aaba1097d966aeb6c68e26230ac4e85bb5d730a2166a7cbad7dc56c3"],"repoTags":["localhost/minikube-local-cache-test:functional-141069"],"size":"3330"},
{"id":"aede1da68984d4b85507f16e15534ac4c1ba0673b1ab1f2d66f37435c9197050","repoDigests":["localhost/my-image@sha256:e6449b99560510f6354a00dcf2a98c0b071abdd8764b572eaf8196954290a786"],"repoTags":["localhost/my-image:functional-141069"],"size":"1468194"},
{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},
{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},
{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},
{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},
{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-141069 image ls --format json --alsologtostderr:
I0919 19:05:31.224364  806175 out.go:345] Setting OutFile to fd 1 ...
I0919 19:05:31.224660  806175 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:05:31.224672  806175 out.go:358] Setting ErrFile to fd 2...
I0919 19:05:31.224677  806175 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:05:31.224880  806175 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-753213/.minikube/bin
I0919 19:05:31.225491  806175 config.go:182] Loaded profile config "functional-141069": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0919 19:05:31.225596  806175 config.go:182] Loaded profile config "functional-141069": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0919 19:05:31.225960  806175 cli_runner.go:164] Run: docker container inspect functional-141069 --format={{.State.Status}}
I0919 19:05:31.243244  806175 ssh_runner.go:195] Run: systemctl --version
I0919 19:05:31.243324  806175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-141069
I0919 19:05:31.260558  806175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/functional-141069/id_rsa Username:docker}
I0919 19:05:31.351935  806175 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-141069 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: e21d15d0a9e14167c38882cd3ef1c7b2ec8eff44bcdb10531c8d8f55e196dfc9
repoDigests:
- localhost/minikube-local-cache-test@sha256:e61c7465aaba1097d966aeb6c68e26230ac4e85bb5d730a2166a7cbad7dc56c3
repoTags:
- localhost/minikube-local-cache-test:functional-141069
size: "3330"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-141069 image ls --format yaml --alsologtostderr:
I0919 19:05:29.075333  805693 out.go:345] Setting OutFile to fd 1 ...
I0919 19:05:29.075484  805693 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:05:29.075493  805693 out.go:358] Setting ErrFile to fd 2...
I0919 19:05:29.075498  805693 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:05:29.075673  805693 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-753213/.minikube/bin
I0919 19:05:29.076266  805693 config.go:182] Loaded profile config "functional-141069": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0919 19:05:29.076376  805693 config.go:182] Loaded profile config "functional-141069": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0919 19:05:29.076741  805693 cli_runner.go:164] Run: docker container inspect functional-141069 --format={{.State.Status}}
I0919 19:05:29.093634  805693 ssh_runner.go:195] Run: systemctl --version
I0919 19:05:29.093678  805693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-141069
I0919 19:05:29.110342  805693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/functional-141069/id_rsa Username:docker}
I0919 19:05:29.199691  805693 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-141069 ssh pgrep buildkitd: exit status 1 (244.285716ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 image build -t localhost/my-image:functional-141069 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-141069 image build -t localhost/my-image:functional-141069 testdata/build --alsologtostderr: (1.484891287s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-141069 image build -t localhost/my-image:functional-141069 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 83ca03a4b72
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-141069
--> aede1da6898
Successfully tagged localhost/my-image:functional-141069
aede1da68984d4b85507f16e15534ac4c1ba0673b1ab1f2d66f37435c9197050
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-141069 image build -t localhost/my-image:functional-141069 testdata/build --alsologtostderr:
I0919 19:05:29.526714  805851 out.go:345] Setting OutFile to fd 1 ...
I0919 19:05:29.526851  805851 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:05:29.526860  805851 out.go:358] Setting ErrFile to fd 2...
I0919 19:05:29.526864  805851 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0919 19:05:29.527042  805851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-753213/.minikube/bin
I0919 19:05:29.527657  805851 config.go:182] Loaded profile config "functional-141069": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0919 19:05:29.528174  805851 config.go:182] Loaded profile config "functional-141069": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0919 19:05:29.528563  805851 cli_runner.go:164] Run: docker container inspect functional-141069 --format={{.State.Status}}
I0919 19:05:29.545808  805851 ssh_runner.go:195] Run: systemctl --version
I0919 19:05:29.545855  805851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-141069
I0919 19:05:29.562211  805851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33528 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/functional-141069/id_rsa Username:docker}
I0919 19:05:29.651751  805851 build_images.go:161] Building image from path: /tmp/build.201686263.tar
I0919 19:05:29.651810  805851 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0919 19:05:29.660019  805851 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.201686263.tar
I0919 19:05:29.663015  805851 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.201686263.tar: stat -c "%s %y" /var/lib/minikube/build/build.201686263.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.201686263.tar': No such file or directory
I0919 19:05:29.663043  805851 ssh_runner.go:362] scp /tmp/build.201686263.tar --> /var/lib/minikube/build/build.201686263.tar (3072 bytes)
I0919 19:05:29.684328  805851 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.201686263
I0919 19:05:29.692317  805851 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.201686263 -xf /var/lib/minikube/build/build.201686263.tar
I0919 19:05:29.700859  805851 crio.go:315] Building image: /var/lib/minikube/build/build.201686263
I0919 19:05:29.700920  805851 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-141069 /var/lib/minikube/build/build.201686263 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0919 19:05:30.944165  805851 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-141069 /var/lib/minikube/build/build.201686263 --cgroup-manager=cgroupfs: (1.243219921s)
I0919 19:05:30.944228  805851 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.201686263
I0919 19:05:30.952937  805851 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.201686263.tar
I0919 19:05:30.961044  805851 build_images.go:217] Built localhost/my-image:functional-141069 from /tmp/build.201686263.tar
I0919 19:05:30.961071  805851 build_images.go:133] succeeded building to: functional-141069
I0919 19:05:30.961075  805851 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.94s)
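
The three STEP lines above imply a build context of this shape; a sketch, with the contents of content.txt assumed (only its name appears in the log):
mkdir -p /tmp/build && cd /tmp/build
printf 'hello\n' > content.txt                 # placeholder payload for the ADD step
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
out/minikube-linux-amd64 -p functional-141069 image build -t localhost/my-image:functional-141069 .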

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 image rm kicbase/echo-server:functional-141069 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)
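
A sketch of the remove-and-verify pattern this test runs, with a grep added here to make the check visible:
out/minikube-linux-amd64 -p functional-141069 image rm kicbase/echo-server:functional-141069 --alsologtostderr
out/minikube-linux-amd64 -p functional-141069 image ls | grep echo-server || echo "image removed"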

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-141069 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)
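
All three variants above run the same command; a sketch of verifying the result by hand (that the kubeconfig context ends up matching the profile name is an assumption):
out/minikube-linux-amd64 -p functional-141069 update-context --alsologtostderr -v=2
kubectl config current-context   # should print functional-141069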

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-141069 tunnel --alsologtostderr] ...
E0919 19:11:57.207805  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-141069
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-141069
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-141069
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (150.55s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-128218 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0919 19:16:57.208175  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-128218 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m29.874278405s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (150.55s)

TestMultiControlPlane/serial/DeployApp (4.06s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-128218 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-128218 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-128218 -- rollout status deployment/busybox: (2.206692735s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-128218 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-128218 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-128218 -- exec busybox-7dff88458-98fkr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-128218 -- exec busybox-7dff88458-v5l4r -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-128218 -- exec busybox-7dff88458-zvx28 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-128218 -- exec busybox-7dff88458-98fkr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-128218 -- exec busybox-7dff88458-v5l4r -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-128218 -- exec busybox-7dff88458-zvx28 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-128218 -- exec busybox-7dff88458-98fkr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-128218 -- exec busybox-7dff88458-v5l4r -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-128218 -- exec busybox-7dff88458-zvx28 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.06s)
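
Each busybox pod must resolve cluster DNS; a spot-check against one of the pods listed above:
kubectl --context ha-128218 exec busybox-7dff88458-98fkr -- nslookup kubernetes.default.svc.cluster.local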

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.04s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-128218 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-128218 -- exec busybox-7dff88458-98fkr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-128218 -- exec busybox-7dff88458-98fkr -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-128218 -- exec busybox-7dff88458-v5l4r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-128218 -- exec busybox-7dff88458-v5l4r -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-128218 -- exec busybox-7dff88458-zvx28 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-128218 -- exec busybox-7dff88458-zvx28 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.04s)

TestMultiControlPlane/serial/AddWorkerNode (30.3s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-128218 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-128218 -v=7 --alsologtostderr: (29.460369316s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (30.30s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-128218 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

TestMultiControlPlane/serial/CopyFile (15.89s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 cp testdata/cp-test.txt ha-128218:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 ssh -n ha-128218 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 cp ha-128218:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4071115053/001/cp-test_ha-128218.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 ssh -n ha-128218 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 cp ha-128218:/home/docker/cp-test.txt ha-128218-m02:/home/docker/cp-test_ha-128218_ha-128218-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 ssh -n ha-128218 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 ssh -n ha-128218-m02 "sudo cat /home/docker/cp-test_ha-128218_ha-128218-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 cp ha-128218:/home/docker/cp-test.txt ha-128218-m03:/home/docker/cp-test_ha-128218_ha-128218-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 ssh -n ha-128218 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 ssh -n ha-128218-m03 "sudo cat /home/docker/cp-test_ha-128218_ha-128218-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 cp ha-128218:/home/docker/cp-test.txt ha-128218-m04:/home/docker/cp-test_ha-128218_ha-128218-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 ssh -n ha-128218 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 ssh -n ha-128218-m04 "sudo cat /home/docker/cp-test_ha-128218_ha-128218-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 cp testdata/cp-test.txt ha-128218-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 ssh -n ha-128218-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 cp ha-128218-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4071115053/001/cp-test_ha-128218-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 ssh -n ha-128218-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 cp ha-128218-m02:/home/docker/cp-test.txt ha-128218:/home/docker/cp-test_ha-128218-m02_ha-128218.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 ssh -n ha-128218-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 ssh -n ha-128218 "sudo cat /home/docker/cp-test_ha-128218-m02_ha-128218.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 cp ha-128218-m02:/home/docker/cp-test.txt ha-128218-m03:/home/docker/cp-test_ha-128218-m02_ha-128218-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 ssh -n ha-128218-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 ssh -n ha-128218-m03 "sudo cat /home/docker/cp-test_ha-128218-m02_ha-128218-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 cp ha-128218-m02:/home/docker/cp-test.txt ha-128218-m04:/home/docker/cp-test_ha-128218-m02_ha-128218-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 ssh -n ha-128218-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 ssh -n ha-128218-m04 "sudo cat /home/docker/cp-test_ha-128218-m02_ha-128218-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 cp testdata/cp-test.txt ha-128218-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 ssh -n ha-128218-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 cp ha-128218-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4071115053/001/cp-test_ha-128218-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 ssh -n ha-128218-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 cp ha-128218-m03:/home/docker/cp-test.txt ha-128218:/home/docker/cp-test_ha-128218-m03_ha-128218.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 ssh -n ha-128218-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 ssh -n ha-128218 "sudo cat /home/docker/cp-test_ha-128218-m03_ha-128218.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 cp ha-128218-m03:/home/docker/cp-test.txt ha-128218-m02:/home/docker/cp-test_ha-128218-m03_ha-128218-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 ssh -n ha-128218-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 ssh -n ha-128218-m02 "sudo cat /home/docker/cp-test_ha-128218-m03_ha-128218-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 cp ha-128218-m03:/home/docker/cp-test.txt ha-128218-m04:/home/docker/cp-test_ha-128218-m03_ha-128218-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 ssh -n ha-128218-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 ssh -n ha-128218-m04 "sudo cat /home/docker/cp-test_ha-128218-m03_ha-128218-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 cp testdata/cp-test.txt ha-128218-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 ssh -n ha-128218-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 cp ha-128218-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4071115053/001/cp-test_ha-128218-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 ssh -n ha-128218-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 cp ha-128218-m04:/home/docker/cp-test.txt ha-128218:/home/docker/cp-test_ha-128218-m04_ha-128218.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 ssh -n ha-128218-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 ssh -n ha-128218 "sudo cat /home/docker/cp-test_ha-128218-m04_ha-128218.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 cp ha-128218-m04:/home/docker/cp-test.txt ha-128218-m02:/home/docker/cp-test_ha-128218-m04_ha-128218-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 ssh -n ha-128218-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 ssh -n ha-128218-m02 "sudo cat /home/docker/cp-test_ha-128218-m04_ha-128218-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 cp ha-128218-m04:/home/docker/cp-test.txt ha-128218-m03:/home/docker/cp-test_ha-128218-m04_ha-128218-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 ssh -n ha-128218-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 ssh -n ha-128218-m03 "sudo cat /home/docker/cp-test_ha-128218-m04_ha-128218-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.89s)
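
The block above is one pattern repeated for every (source, destination) pair: `minikube cp` a file into a node, then `minikube ssh -n <node> "sudo cat ..."` to read it back. A standalone sketch of the same round trip, with profile and paths taken from this run; verifyCopy is a hypothetical helper, not the one in helpers_test.go:

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// verifyCopy sketches the round trip above: copy a local file into a
// node with `minikube cp`, read it back over `minikube ssh -n <node>`,
// and compare the bytes.
func verifyCopy(profile, node, localPath, remotePath string) error {
	if out, err := exec.Command("minikube", "-p", profile,
		"cp", localPath, node+":"+remotePath).CombinedOutput(); err != nil {
		return fmt.Errorf("cp to %s failed: %v: %s", node, err, out)
	}
	got, err := exec.Command("minikube", "-p", profile,
		"ssh", "-n", node, "sudo cat "+remotePath).Output()
	if err != nil {
		return fmt.Errorf("ssh cat on %s failed: %v", node, err)
	}
	want, err := os.ReadFile(localPath)
	if err != nil {
		return err
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		return fmt.Errorf("content mismatch on %s:%s", node, remotePath)
	}
	return nil
}

func main() {
	err := verifyCopy("ha-128218", "ha-128218-m02",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```
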

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.48s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 node stop m02 -v=7 --alsologtostderr
E0919 19:18:20.275286  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-128218 node stop m02 -v=7 --alsologtostderr: (11.811109642s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-128218 status -v=7 --alsologtostderr: exit status 7 (663.726842ms)

                                                
                                                
-- stdout --
	ha-128218
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-128218-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-128218-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-128218-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 19:18:30.481457  831661 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:18:30.481583  831661 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:18:30.481592  831661 out.go:358] Setting ErrFile to fd 2...
	I0919 19:18:30.481596  831661 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:18:30.481774  831661 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-753213/.minikube/bin
	I0919 19:18:30.481939  831661 out.go:352] Setting JSON to false
	I0919 19:18:30.481971  831661 mustload.go:65] Loading cluster: ha-128218
	I0919 19:18:30.482010  831661 notify.go:220] Checking for updates...
	I0919 19:18:30.482401  831661 config.go:182] Loaded profile config "ha-128218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:18:30.482425  831661 status.go:174] checking status of ha-128218 ...
	I0919 19:18:30.482893  831661 cli_runner.go:164] Run: docker container inspect ha-128218 --format={{.State.Status}}
	I0919 19:18:30.500820  831661 status.go:364] ha-128218 host status = "Running" (err=<nil>)
	I0919 19:18:30.500856  831661 host.go:66] Checking if "ha-128218" exists ...
	I0919 19:18:30.501149  831661 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-128218
	I0919 19:18:30.518062  831661 host.go:66] Checking if "ha-128218" exists ...
	I0919 19:18:30.518306  831661 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 19:18:30.518343  831661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-128218
	I0919 19:18:30.534836  831661 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/ha-128218/id_rsa Username:docker}
	I0919 19:18:30.644636  831661 ssh_runner.go:195] Run: systemctl --version
	I0919 19:18:30.648751  831661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 19:18:30.659711  831661 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 19:18:30.709059  831661 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-19 19:18:30.699211549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 19:18:30.710042  831661 kubeconfig.go:125] found "ha-128218" server: "https://192.168.49.254:8443"
	I0919 19:18:30.710094  831661 api_server.go:166] Checking apiserver status ...
	I0919 19:18:30.710139  831661 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 19:18:30.720982  831661 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1506/cgroup
	I0919 19:18:30.729303  831661 api_server.go:182] apiserver freezer: "10:freezer:/docker/d53185e79feb70bb5b3699031dae97e165d4010779b002780fe5476f8ba1539e/crio/crio-87a2072f5f7ed49f6db5ced6e607baaaa59152067c78324a7011aa223faf5caf"
	I0919 19:18:30.729356  831661 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d53185e79feb70bb5b3699031dae97e165d4010779b002780fe5476f8ba1539e/crio/crio-87a2072f5f7ed49f6db5ced6e607baaaa59152067c78324a7011aa223faf5caf/freezer.state
	I0919 19:18:30.737155  831661 api_server.go:204] freezer state: "THAWED"
	I0919 19:18:30.737178  831661 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 19:18:30.740758  831661 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 19:18:30.740777  831661 status.go:456] ha-128218 apiserver status = Running (err=<nil>)
	I0919 19:18:30.740788  831661 status.go:176] ha-128218 status: &{Name:ha-128218 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 19:18:30.740805  831661 status.go:174] checking status of ha-128218-m02 ...
	I0919 19:18:30.741042  831661 cli_runner.go:164] Run: docker container inspect ha-128218-m02 --format={{.State.Status}}
	I0919 19:18:30.759261  831661 status.go:364] ha-128218-m02 host status = "Stopped" (err=<nil>)
	I0919 19:18:30.759283  831661 status.go:377] host is not running, skipping remaining checks
	I0919 19:18:30.759289  831661 status.go:176] ha-128218-m02 status: &{Name:ha-128218-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 19:18:30.759344  831661 status.go:174] checking status of ha-128218-m03 ...
	I0919 19:18:30.759622  831661 cli_runner.go:164] Run: docker container inspect ha-128218-m03 --format={{.State.Status}}
	I0919 19:18:30.777854  831661 status.go:364] ha-128218-m03 host status = "Running" (err=<nil>)
	I0919 19:18:30.777884  831661 host.go:66] Checking if "ha-128218-m03" exists ...
	I0919 19:18:30.778200  831661 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-128218-m03
	I0919 19:18:30.795259  831661 host.go:66] Checking if "ha-128218-m03" exists ...
	I0919 19:18:30.795529  831661 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 19:18:30.795586  831661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-128218-m03
	I0919 19:18:30.812809  831661 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33543 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/ha-128218-m03/id_rsa Username:docker}
	I0919 19:18:30.904438  831661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 19:18:30.915232  831661 kubeconfig.go:125] found "ha-128218" server: "https://192.168.49.254:8443"
	I0919 19:18:30.915258  831661 api_server.go:166] Checking apiserver status ...
	I0919 19:18:30.915289  831661 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 19:18:30.924512  831661 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1433/cgroup
	I0919 19:18:30.932925  831661 api_server.go:182] apiserver freezer: "10:freezer:/docker/6e74a042871cec9005ff9021a1a9f126698f7904c908b05ba9912a9533f4fb7d/crio/crio-5f974c1486b2f33f63e93dedc58f4b74503899d2ebf51a93281dbec04b78acac"
	I0919 19:18:30.932982  831661 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6e74a042871cec9005ff9021a1a9f126698f7904c908b05ba9912a9533f4fb7d/crio/crio-5f974c1486b2f33f63e93dedc58f4b74503899d2ebf51a93281dbec04b78acac/freezer.state
	I0919 19:18:30.940308  831661 api_server.go:204] freezer state: "THAWED"
	I0919 19:18:30.940333  831661 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 19:18:30.944250  831661 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 19:18:30.944273  831661 status.go:456] ha-128218-m03 apiserver status = Running (err=<nil>)
	I0919 19:18:30.944281  831661 status.go:176] ha-128218-m03 status: &{Name:ha-128218-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 19:18:30.944296  831661 status.go:174] checking status of ha-128218-m04 ...
	I0919 19:18:30.944522  831661 cli_runner.go:164] Run: docker container inspect ha-128218-m04 --format={{.State.Status}}
	I0919 19:18:30.962083  831661 status.go:364] ha-128218-m04 host status = "Running" (err=<nil>)
	I0919 19:18:30.962110  831661 host.go:66] Checking if "ha-128218-m04" exists ...
	I0919 19:18:30.962369  831661 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-128218-m04
	I0919 19:18:30.979190  831661 host.go:66] Checking if "ha-128218-m04" exists ...
	I0919 19:18:30.979513  831661 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 19:18:30.979555  831661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-128218-m04
	I0919 19:18:30.996166  831661 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33548 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/ha-128218-m04/id_rsa Username:docker}
	I0919 19:18:31.088132  831661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 19:18:31.099045  831661 status.go:176] ha-128218-m04 status: &{Name:ha-128218-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.48s)
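
The stderr trace shows how status decides an apiserver is healthy on a running node: pgrep finds the kube-apiserver PID, the freezer line is grepped out of /proc/<pid>/cgroup, freezer.state is read to confirm "THAWED", and only then is /healthz probed. A sketch of just the cgroup-line parsing step (the sample input shortens the container IDs from the trace):

```go
package main

import (
	"fmt"
	"regexp"
)

// freezerCgroup mirrors `sudo egrep ^[0-9]+:freezer: /proc/<pid>/cgroup`:
// pull the freezer controller's path out of a cgroup v1 listing. The
// caller would then read freezer.state under that path and expect
// "THAWED" before probing /healthz.
var freezerRe = regexp.MustCompile(`(?m)^\d+:freezer:(.+)$`)

func freezerCgroup(procCgroup string) (string, bool) {
	m := freezerRe.FindStringSubmatch(procCgroup)
	if m == nil {
		return "", false
	}
	return m[1], true
}

func main() {
	// Shape taken from the trace above, with container IDs shortened.
	sample := "11:memory:/docker/d53185e79feb/crio/crio-87a2072f\n" +
		"10:freezer:/docker/d53185e79feb/crio/crio-87a2072f\n"
	path, ok := freezerCgroup(sample)
	fmt.Println(path, ok)
}
```
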

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (19.92s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 node start m02 -v=7 --alsologtostderr
E0919 19:18:38.263749  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/functional-141069/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:18:38.270159  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/functional-141069/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:18:38.281532  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/functional-141069/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:18:38.302904  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/functional-141069/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:18:38.344312  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/functional-141069/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:18:38.425752  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/functional-141069/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:18:38.587249  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/functional-141069/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:18:38.908970  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/functional-141069/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:18:39.551149  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/functional-141069/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:18:40.833106  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/functional-141069/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:18:43.394616  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/functional-141069/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:18:48.516398  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/functional-141069/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-128218 node start m02 -v=7 --alsologtostderr: (18.797322878s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-amd64 -p ha-128218 status -v=7 --alsologtostderr: (1.047842416s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (19.92s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.96s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.96s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (273.34s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-128218 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-128218 -v=7 --alsologtostderr
E0919 19:18:58.758050  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/functional-141069/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:19:19.240297  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/functional-141069/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-128218 -v=7 --alsologtostderr: (36.714881959s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-128218 --wait=true -v=7 --alsologtostderr
E0919 19:20:00.202667  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/functional-141069/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:21:22.124497  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/functional-141069/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:21:57.207429  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-128218 --wait=true -v=7 --alsologtostderr: (3m56.519063693s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-128218
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (273.34s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (12.11s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-128218 node delete m03 -v=7 --alsologtostderr: (11.306550693s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.11s)
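
The quoted go-template at ha_test.go:519 is plain text/template syntax, so it can be exercised locally once stripped of the shell quoting: for every item in a NodeList, print the status of the condition whose type is "Ready". A minimal sketch over a hand-built document (the one-node JSON below is illustrative, not output from this run):

```go
package main

import (
	"encoding/json"
	"os"
	"text/template"
)

// The template the test passes to kubectl, minus the shell quoting:
// for every node, print the status of its "Ready" condition.
const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	// Illustrative one-node NodeList.
	doc := []byte(`{"items":[{"status":{"conditions":[
		{"type":"MemoryPressure","status":"False"},
		{"type":"Ready","status":"True"}]}}]}`)
	var nodes map[string]interface{}
	if err := json.Unmarshal(doc, &nodes); err != nil {
		panic(err)
	}
	t := template.Must(template.New("ready").Parse(readyTmpl))
	// Prints " True" followed by a newline for the node above.
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}
```
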

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0919 19:23:38.264209  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/functional-141069/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.54s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 stop -v=7 --alsologtostderr
E0919 19:24:05.967190  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/functional-141069/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-128218 stop -v=7 --alsologtostderr: (35.436934276s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-128218 status -v=7 --alsologtostderr: exit status 7 (99.664714ms)

                                                
                                                
-- stdout --
	ha-128218
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-128218-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-128218-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 19:24:14.244290  850095 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:24:14.244423  850095 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:24:14.244433  850095 out.go:358] Setting ErrFile to fd 2...
	I0919 19:24:14.244440  850095 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:24:14.244654  850095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-753213/.minikube/bin
	I0919 19:24:14.244851  850095 out.go:352] Setting JSON to false
	I0919 19:24:14.244897  850095 mustload.go:65] Loading cluster: ha-128218
	I0919 19:24:14.245002  850095 notify.go:220] Checking for updates...
	I0919 19:24:14.245375  850095 config.go:182] Loaded profile config "ha-128218": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:24:14.245401  850095 status.go:174] checking status of ha-128218 ...
	I0919 19:24:14.245872  850095 cli_runner.go:164] Run: docker container inspect ha-128218 --format={{.State.Status}}
	I0919 19:24:14.264946  850095 status.go:364] ha-128218 host status = "Stopped" (err=<nil>)
	I0919 19:24:14.264973  850095 status.go:377] host is not running, skipping remaining checks
	I0919 19:24:14.264981  850095 status.go:176] ha-128218 status: &{Name:ha-128218 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 19:24:14.265030  850095 status.go:174] checking status of ha-128218-m02 ...
	I0919 19:24:14.265371  850095 cli_runner.go:164] Run: docker container inspect ha-128218-m02 --format={{.State.Status}}
	I0919 19:24:14.281752  850095 status.go:364] ha-128218-m02 host status = "Stopped" (err=<nil>)
	I0919 19:24:14.281799  850095 status.go:377] host is not running, skipping remaining checks
	I0919 19:24:14.281826  850095 status.go:176] ha-128218-m02 status: &{Name:ha-128218-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 19:24:14.281870  850095 status.go:174] checking status of ha-128218-m04 ...
	I0919 19:24:14.282153  850095 cli_runner.go:164] Run: docker container inspect ha-128218-m04 --format={{.State.Status}}
	I0919 19:24:14.298615  850095 status.go:364] ha-128218-m04 host status = "Stopped" (err=<nil>)
	I0919 19:24:14.298654  850095 status.go:377] host is not running, skipping remaining checks
	I0919 19:24:14.298664  850095 status.go:176] ha-128218-m04 status: &{Name:ha-128218-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.54s)
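
Note the convention both stop scenarios above rely on: status exits non-zero (7) when any host in the profile is stopped, while still printing the per-node table, so the test treats the non-zero exit as data rather than as a failure. A sketch of consuming it the same way from Go; reading exit code 7 as "stopped" is an assumption drawn from these runs, not from documented exit-code semantics:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// clusterStatus runs `minikube status` and, following the runs above,
// treats exit status 7 as "some node is stopped" rather than a command
// failure: the per-node table on stdout is still the useful result.
func clusterStatus(profile string) (string, error) {
	out, err := exec.Command("minikube", "-p", profile, "status").Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		return string(out), nil
	}
	return string(out), err
}

func main() {
	out, err := clusterStatus("ha-128218")
	fmt.Print(out)
	if err != nil {
		fmt.Println("status failed:", err)
	}
}
```
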

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (67.05s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-128218 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-128218 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m6.287094985s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (67.05s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (66.87s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-128218 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-128218 --control-plane -v=7 --alsologtostderr: (1m6.052252622s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-128218 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (66.87s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

                                                
                                    
TestJSONOutput/start/Command (67.53s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-932291 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0919 19:26:57.207857  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-932291 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m7.530445045s)
--- PASS: TestJSONOutput/start/Command (67.53s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.68s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-932291 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.58s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-932291 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.73s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-932291 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-932291 --output=json --user=testUser: (5.729377244s)
--- PASS: TestJSONOutput/stop/Command (5.73s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-514579 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-514579 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (64.350584ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b93f77d0-51fc-45d1-8ee7-e87038fd62e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-514579] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"22379e45-f477-494d-b534-129c0b5a7d91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19664"}}
	{"specversion":"1.0","id":"b33091e9-6de5-4744-a1ad-9f7145af64f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8b4ea937-4e94-45f3-a9a9-93b8e3daedb1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19664-753213/kubeconfig"}}
	{"specversion":"1.0","id":"61a7788e-3ace-4e7b-a0d0-0a7827eb5bc8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-753213/.minikube"}}
	{"specversion":"1.0","id":"b13f6771-8da7-4f35-899b-5b01bf6d552e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"5e9b9280-77fd-4a0f-af45-17c67c7af317","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7f16073b-c0c7-4b65-98b8-732f9b9a5e48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-514579" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-514579
--- PASS: TestErrorJSONOutput (0.20s)
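
Each line in the stdout block above is a CloudEvents-style envelope with the payload under "data"; the final io.k8s.sigs.minikube.error event carries the exit code and error name the test asserts on. A decoding sketch, with the struct shaped after these lines rather than copied from minikube's source, and the sample event trimmed of its empty fields:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// event is shaped after the JSON lines above: a CloudEvents envelope
// whose payload sits under "data". Field names come from this log,
// not from minikube's source.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// The error event from the stdout block, trimmed of empty fields.
	line := `{"specversion":"1.0","id":"7f16073b-c0c7-4b65-98b8-732f9b9a5e48",` +
		`"source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error",` +
		`"datacontenttype":"application/json","data":{"exitcode":"56",` +
		`"message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		panic(err)
	}
	fmt.Println(e.Type, e.Data["name"], e.Data["exitcode"])
}
```
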

                                                
                                    
TestKicCustomNetwork/create_custom_network (31.07s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-437081 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-437081 --network=: (29.109538076s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-437081" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-437081
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-437081: (1.947850429s)
--- PASS: TestKicCustomNetwork/create_custom_network (31.07s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (23.84s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-386419 --network=bridge
E0919 19:28:38.263521  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/functional-141069/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-386419 --network=bridge: (21.995513882s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-386419" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-386419
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-386419: (1.830037684s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.84s)

                                                
                                    
TestKicExistingNetwork (25.48s)

=== RUN   TestKicExistingNetwork
I0919 19:28:51.167666  760079 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0919 19:28:51.184353  760079 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0919 19:28:51.184429  760079 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0919 19:28:51.184447  760079 cli_runner.go:164] Run: docker network inspect existing-network
W0919 19:28:51.199934  760079 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0919 19:28:51.199975  760079 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0919 19:28:51.199990  760079 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0919 19:28:51.200108  760079 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0919 19:28:51.216526  760079 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-84e39a30fc38 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:ad:03:5d:c6} reservation:<nil>}
I0919 19:28:51.217047  760079 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000885000}
I0919 19:28:51.217075  760079 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0919 19:28:51.217117  760079 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0919 19:28:51.276345  760079 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-365451 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-365451 --network=existing-network: (23.521701901s)
helpers_test.go:175: Cleaning up "existing-network-365451" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-365451
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-365451: (1.817950381s)
I0919 19:29:16.632333  760079 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.48s)
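
The network_create trace above shows the allocation strategy: inspect the existing bridge networks, skip 192.168.49.0/24 because the ha-128218 bridge already owns it, and settle on 192.168.58.0/24. A toy sketch of that scan; the step of 9 between candidates is inferred from the 49 -> 58 jump in this one run, not read out of minikube's source:

```go
package main

import "fmt"

// nextFreeSubnet sketches the scan visible in the network_create
// trace: start at 192.168.49.0/24 and advance the third octet until
// a subnet is not already taken. The step of 9 is an assumption
// inferred from this log.
func nextFreeSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 247; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[subnet] {
			return subnet
		}
	}
	return "" // no free /24 in the scanned range
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, // held by the ha-128218 bridge above
	}
	fmt.Println(nextFreeSubnet(taken)) // 192.168.58.0/24
}
```
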

                                                
                                    
TestKicCustomSubnet (26.94s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-304526 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-304526 --subnet=192.168.60.0/24: (24.862855665s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-304526 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-304526" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-304526
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-304526: (2.055625876s)
--- PASS: TestKicCustomSubnet (26.94s)

                                                
                                    
TestKicStaticIP (23.47s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-360537 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-360537 --static-ip=192.168.200.200: (21.433766595s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-360537 ip
helpers_test.go:175: Cleaning up "static-ip-360537" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-360537
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-360537: (1.915135817s)
--- PASS: TestKicStaticIP (23.47s)

                                                
                                    
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (51.53s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-176023 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-176023 --driver=docker  --container-runtime=crio: (21.912627332s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-191023 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-191023 --driver=docker  --container-runtime=crio: (24.500548212s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-176023
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-191023
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-191023" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-191023
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-191023: (1.828727963s)
helpers_test.go:175: Cleaning up "first-176023" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-176023
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-176023: (2.141323767s)
--- PASS: TestMinikubeProfile (51.53s)
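
The profile test drives two independent clusters and switches the active profile between them. A minimal sketch (profile names are placeholders):

    minikube start -p first --driver=docker --container-runtime=crio
    minikube start -p second --driver=docker --container-runtime=crio
    minikube profile first           # make "first" the active profile
    minikube profile list -ojson    # machine-readable view of both profiles
    minikube delete -p second
    minikube delete -p first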

TestMountStart/serial/StartWithMountFirst (5.6s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-881824 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-881824 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.595628099s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.60s)
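
The mount flags above configure the 9p share between host and node: the gid/uid own the mounted files, msize is the 9p payload size, and the port carries the mount server. A sketch with the same values ("mnt-demo" is a placeholder profile):

    minikube start -p mnt-demo --memory=2048 --mount --mount-gid 0 --mount-uid 0 \
      --mount-msize 6543 --mount-port 46464 --no-kubernetes --driver=docker --container-runtime=crio
    minikube -p mnt-demo ssh -- ls /minikube-host   # the mounted host directory appears here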

TestMountStart/serial/VerifyMountFirst (0.24s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-881824 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

TestMountStart/serial/StartWithMountSecond (8.29s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-927372 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-927372 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.293738593s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.29s)

TestMountStart/serial/VerifyMountSecond (0.24s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-927372 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

TestMountStart/serial/DeleteFirst (1.61s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-881824 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-881824 --alsologtostderr -v=5: (1.605309795s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

TestMountStart/serial/VerifyMountPostDelete (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-927372 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

TestMountStart/serial/Stop (1.17s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-927372
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-927372: (1.174287708s)
--- PASS: TestMountStart/serial/Stop (1.17s)

TestMountStart/serial/RestartStopped (7.2s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-927372
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-927372: (6.199450722s)
--- PASS: TestMountStart/serial/RestartStopped (7.20s)

TestMountStart/serial/VerifyMountPostStop (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-927372 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

TestMultiNode/serial/FreshStart2Nodes (72.28s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-442320 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0919 19:31:57.208069  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-442320 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m11.837690941s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (72.28s)
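
Multi-node startup is a single flag away from the default; a sketch of the two-node start being validated here ("mn-demo" is a placeholder profile, reused in the sketches below):

    minikube start -p mn-demo --nodes=2 --memory=2200 --wait=true --driver=docker --container-runtime=crio
    minikube -p mn-demo status   # expect one Control Plane entry and one Worker entry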

TestMultiNode/serial/DeployApp2Nodes (3.56s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-442320 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-442320 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-442320 -- rollout status deployment/busybox: (2.219430776s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-442320 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-442320 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-442320 -- exec busybox-7dff88458-9hhh4 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-442320 -- exec busybox-7dff88458-jnx5m -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-442320 -- exec busybox-7dff88458-9hhh4 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-442320 -- exec busybox-7dff88458-jnx5m -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-442320 -- exec busybox-7dff88458-9hhh4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-442320 -- exec busybox-7dff88458-jnx5m -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.56s)
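
The deploy test is a cluster-DNS smoke check: roll out busybox replicas across the nodes, then resolve service names from inside each pod. A hedged sketch (the manifest path and the app=busybox label are assumptions, not taken from this run):

    kubectl --context mn-demo apply -f multinode-pod-dns-test.yaml
    kubectl --context mn-demo rollout status deployment/busybox
    for pod in $(kubectl --context mn-demo get pods -l app=busybox -o jsonpath='{.items[*].metadata.name}'); do
      # Each pod must resolve the API service through cluster DNS
      kubectl --context mn-demo exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done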

TestMultiNode/serial/PingHostFrom2Pods (0.7s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-442320 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-442320 -- exec busybox-7dff88458-9hhh4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-442320 -- exec busybox-7dff88458-9hhh4 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-442320 -- exec busybox-7dff88458-jnx5m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-442320 -- exec busybox-7dff88458-jnx5m -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.70s)
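
The awk/cut pipeline above pulls the resolved address out of busybox's nslookup output (the answer sits on line 5, third space-separated field of that specific format), then pings it to prove pods can reach the host. A sketch for one pod ("$pod" as set in the loop sketched earlier):

    HOST_IP=$(kubectl --context mn-demo exec "$pod" -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl --context mn-demo exec "$pod" -- sh -c "ping -c 1 $HOST_IP"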

TestMultiNode/serial/AddNode (55.64s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-442320 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-442320 -v 3 --alsologtostderr: (55.038091756s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (55.64s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-442320 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.63s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.63s)

TestMultiNode/serial/CopyFile (9.04s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 status --output json --alsologtostderr
E0919 19:33:38.264223  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/functional-141069/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 cp testdata/cp-test.txt multinode-442320:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 ssh -n multinode-442320 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 cp multinode-442320:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2923301509/001/cp-test_multinode-442320.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 ssh -n multinode-442320 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 cp multinode-442320:/home/docker/cp-test.txt multinode-442320-m02:/home/docker/cp-test_multinode-442320_multinode-442320-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 ssh -n multinode-442320 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 ssh -n multinode-442320-m02 "sudo cat /home/docker/cp-test_multinode-442320_multinode-442320-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 cp multinode-442320:/home/docker/cp-test.txt multinode-442320-m03:/home/docker/cp-test_multinode-442320_multinode-442320-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 ssh -n multinode-442320 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 ssh -n multinode-442320-m03 "sudo cat /home/docker/cp-test_multinode-442320_multinode-442320-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 cp testdata/cp-test.txt multinode-442320-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 ssh -n multinode-442320-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 cp multinode-442320-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2923301509/001/cp-test_multinode-442320-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 ssh -n multinode-442320-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 cp multinode-442320-m02:/home/docker/cp-test.txt multinode-442320:/home/docker/cp-test_multinode-442320-m02_multinode-442320.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 ssh -n multinode-442320-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 ssh -n multinode-442320 "sudo cat /home/docker/cp-test_multinode-442320-m02_multinode-442320.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 cp multinode-442320-m02:/home/docker/cp-test.txt multinode-442320-m03:/home/docker/cp-test_multinode-442320-m02_multinode-442320-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 ssh -n multinode-442320-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 ssh -n multinode-442320-m03 "sudo cat /home/docker/cp-test_multinode-442320-m02_multinode-442320-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 cp testdata/cp-test.txt multinode-442320-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 ssh -n multinode-442320-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 cp multinode-442320-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2923301509/001/cp-test_multinode-442320-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 ssh -n multinode-442320-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 cp multinode-442320-m03:/home/docker/cp-test.txt multinode-442320:/home/docker/cp-test_multinode-442320-m03_multinode-442320.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 ssh -n multinode-442320-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 ssh -n multinode-442320 "sudo cat /home/docker/cp-test_multinode-442320-m03_multinode-442320.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 cp multinode-442320-m03:/home/docker/cp-test.txt multinode-442320-m02:/home/docker/cp-test_multinode-442320-m03_multinode-442320-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 ssh -n multinode-442320-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 ssh -n multinode-442320-m02 "sudo cat /home/docker/cp-test_multinode-442320-m03_multinode-442320-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.04s)
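
The copy matrix above exercises all three directions minikube cp supports. The essentials, against the placeholder profile:

    minikube -p mn-demo cp testdata/cp-test.txt mn-demo:/home/docker/cp-test.txt       # host -> node
    minikube -p mn-demo cp mn-demo:/home/docker/cp-test.txt /tmp/cp-test.txt           # node -> host
    minikube -p mn-demo cp mn-demo:/home/docker/cp-test.txt \
      mn-demo-m02:/home/docker/cp-test.txt                                             # node -> node
    minikube -p mn-demo ssh -n mn-demo-m02 "sudo cat /home/docker/cp-test.txt"         # verify on the target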

TestMultiNode/serial/StopNode (2.09s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-442320 node stop m03: (1.171176108s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-442320 status: exit status 7 (452.724317ms)

-- stdout --
	multinode-442320
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-442320-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-442320-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-442320 status --alsologtostderr: exit status 7 (461.970982ms)

-- stdout --
	multinode-442320
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-442320-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-442320-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0919 19:33:48.816732  915067 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:33:48.816843  915067 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:33:48.816852  915067 out.go:358] Setting ErrFile to fd 2...
	I0919 19:33:48.816857  915067 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:33:48.817028  915067 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-753213/.minikube/bin
	I0919 19:33:48.817192  915067 out.go:352] Setting JSON to false
	I0919 19:33:48.817220  915067 mustload.go:65] Loading cluster: multinode-442320
	I0919 19:33:48.817349  915067 notify.go:220] Checking for updates...
	I0919 19:33:48.817779  915067 config.go:182] Loaded profile config "multinode-442320": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:33:48.817821  915067 status.go:174] checking status of multinode-442320 ...
	I0919 19:33:48.818355  915067 cli_runner.go:164] Run: docker container inspect multinode-442320 --format={{.State.Status}}
	I0919 19:33:48.837108  915067 status.go:364] multinode-442320 host status = "Running" (err=<nil>)
	I0919 19:33:48.837133  915067 host.go:66] Checking if "multinode-442320" exists ...
	I0919 19:33:48.837445  915067 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-442320
	I0919 19:33:48.854400  915067 host.go:66] Checking if "multinode-442320" exists ...
	I0919 19:33:48.854718  915067 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 19:33:48.854780  915067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-442320
	I0919 19:33:48.871675  915067 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33653 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/multinode-442320/id_rsa Username:docker}
	I0919 19:33:48.964403  915067 ssh_runner.go:195] Run: systemctl --version
	I0919 19:33:48.968323  915067 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 19:33:48.978952  915067 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 19:33:49.026176  915067 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-09-19 19:33:49.016792602 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 19:33:49.026787  915067 kubeconfig.go:125] found "multinode-442320" server: "https://192.168.67.2:8443"
	I0919 19:33:49.026838  915067 api_server.go:166] Checking apiserver status ...
	I0919 19:33:49.026881  915067 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 19:33:49.037457  915067 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1490/cgroup
	I0919 19:33:49.046073  915067 api_server.go:182] apiserver freezer: "10:freezer:/docker/843ff7b5b95e06a11333d034b57911b39b854b4753494eaf3ddeb876733f6bba/crio/crio-58b2a9e5d392c9be25bd1e7ec7a66030c0c384ab65cab4f68d879abaf919dae1"
	I0919 19:33:49.046139  915067 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/843ff7b5b95e06a11333d034b57911b39b854b4753494eaf3ddeb876733f6bba/crio/crio-58b2a9e5d392c9be25bd1e7ec7a66030c0c384ab65cab4f68d879abaf919dae1/freezer.state
	I0919 19:33:49.053959  915067 api_server.go:204] freezer state: "THAWED"
	I0919 19:33:49.053982  915067 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0919 19:33:49.058405  915067 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0919 19:33:49.058426  915067 status.go:456] multinode-442320 apiserver status = Running (err=<nil>)
	I0919 19:33:49.058438  915067 status.go:176] multinode-442320 status: &{Name:multinode-442320 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 19:33:49.058458  915067 status.go:174] checking status of multinode-442320-m02 ...
	I0919 19:33:49.058713  915067 cli_runner.go:164] Run: docker container inspect multinode-442320-m02 --format={{.State.Status}}
	I0919 19:33:49.075657  915067 status.go:364] multinode-442320-m02 host status = "Running" (err=<nil>)
	I0919 19:33:49.075680  915067 host.go:66] Checking if "multinode-442320-m02" exists ...
	I0919 19:33:49.075949  915067 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-442320-m02
	I0919 19:33:49.093029  915067 host.go:66] Checking if "multinode-442320-m02" exists ...
	I0919 19:33:49.093332  915067 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 19:33:49.093386  915067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-442320-m02
	I0919 19:33:49.110012  915067 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33658 SSHKeyPath:/home/jenkins/minikube-integration/19664-753213/.minikube/machines/multinode-442320-m02/id_rsa Username:docker}
	I0919 19:33:49.204402  915067 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 19:33:49.214762  915067 status.go:176] multinode-442320-m02 status: &{Name:multinode-442320-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0919 19:33:49.214799  915067 status.go:174] checking status of multinode-442320-m03 ...
	I0919 19:33:49.215079  915067 cli_runner.go:164] Run: docker container inspect multinode-442320-m03 --format={{.State.Status}}
	I0919 19:33:49.232435  915067 status.go:364] multinode-442320-m03 host status = "Stopped" (err=<nil>)
	I0919 19:33:49.232459  915067 status.go:377] host is not running, skipping remaining checks
	I0919 19:33:49.232468  915067 status.go:176] multinode-442320-m03 status: &{Name:multinode-442320-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.09s)
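
Note the exit codes: once a node host is stopped, minikube status exits 7 (here after stopping m03) even though it still prints the per-node table, so scripts should branch on the code rather than on the output. A sketch:

    minikube -p mn-demo node stop m03
    minikube -p mn-demo status
    rc=$?   # 0 = everything running; 7 (as in this run) = at least one host stopped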

TestMultiNode/serial/StartAfterStop (9s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-442320 node start m03 -v=7 --alsologtostderr: (8.344897853s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.00s)

TestMultiNode/serial/RestartKeepsNodes (101.94s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-442320
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-442320
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-442320: (24.664764215s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-442320 --wait=true -v=8 --alsologtostderr
E0919 19:35:00.277249  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:35:01.329292  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/functional-141069/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-442320 --wait=true -v=8 --alsologtostderr: (1m17.184911499s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-442320
--- PASS: TestMultiNode/serial/RestartKeepsNodes (101.94s)

TestMultiNode/serial/DeleteNode (5.22s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-442320 node delete m03: (4.660141099s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.22s)

TestMultiNode/serial/StopMultiNode (23.69s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-442320 stop: (23.517157582s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-442320 status: exit status 7 (85.025789ms)

-- stdout --
	multinode-442320
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-442320-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-442320 status --alsologtostderr: exit status 7 (83.904935ms)

-- stdout --
	multinode-442320
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-442320-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0919 19:36:09.048560  924756 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:36:09.048706  924756 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:36:09.048716  924756 out.go:358] Setting ErrFile to fd 2...
	I0919 19:36:09.048721  924756 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:36:09.048893  924756 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-753213/.minikube/bin
	I0919 19:36:09.049056  924756 out.go:352] Setting JSON to false
	I0919 19:36:09.049091  924756 mustload.go:65] Loading cluster: multinode-442320
	I0919 19:36:09.049216  924756 notify.go:220] Checking for updates...
	I0919 19:36:09.049622  924756 config.go:182] Loaded profile config "multinode-442320": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:36:09.049650  924756 status.go:174] checking status of multinode-442320 ...
	I0919 19:36:09.050216  924756 cli_runner.go:164] Run: docker container inspect multinode-442320 --format={{.State.Status}}
	I0919 19:36:09.070035  924756 status.go:364] multinode-442320 host status = "Stopped" (err=<nil>)
	I0919 19:36:09.070069  924756 status.go:377] host is not running, skipping remaining checks
	I0919 19:36:09.070082  924756 status.go:176] multinode-442320 status: &{Name:multinode-442320 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 19:36:09.070120  924756 status.go:174] checking status of multinode-442320-m02 ...
	I0919 19:36:09.070384  924756 cli_runner.go:164] Run: docker container inspect multinode-442320-m02 --format={{.State.Status}}
	I0919 19:36:09.087439  924756 status.go:364] multinode-442320-m02 host status = "Stopped" (err=<nil>)
	I0919 19:36:09.087464  924756 status.go:377] host is not running, skipping remaining checks
	I0919 19:36:09.087470  924756 status.go:176] multinode-442320-m02 status: &{Name:multinode-442320-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.69s)

TestMultiNode/serial/RestartMultiNode (46.91s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-442320 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-442320 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (46.355067765s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-442320 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (46.91s)

TestMultiNode/serial/ValidateNameConflict (21.94s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-442320
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-442320-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-442320-m02 --driver=docker  --container-runtime=crio: exit status 14 (65.114768ms)

-- stdout --
	* [multinode-442320-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19664-753213/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-753213/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-442320-m02' is duplicated with machine name 'multinode-442320-m02' in profile 'multinode-442320'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-442320-m03 --driver=docker  --container-runtime=crio
E0919 19:36:57.207440  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-442320-m03 --driver=docker  --container-runtime=crio: (19.73051244s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-442320
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-442320: exit status 80 (261.435176ms)

-- stdout --
	* Adding node m03 to cluster multinode-442320 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-442320-m03 already exists in multinode-442320-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-442320-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-442320-m03: (1.842389215s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (21.94s)

TestPreload (105.29s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-996929 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0919 19:38:38.264039  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/functional-141069/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-996929 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m20.222209661s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-996929 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-996929 image pull gcr.io/k8s-minikube/busybox: (1.194391345s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-996929
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-996929: (5.647187552s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-996929 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-996929 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (16.080528662s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-996929 image list
helpers_test.go:175: Cleaning up "test-preload-996929" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-996929
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-996929: (1.924610535s)
--- PASS: TestPreload (105.29s)
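
The preload test forces a cold start (--preload=false skips the pre-baked images tarball), side-loads an image, and checks that it survives a stop/start cycle. A sketch ("preload-demo" is a placeholder):

    minikube start -p preload-demo --memory=2200 --preload=false \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.24.4
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo --memory=2200 --driver=docker --container-runtime=crio
    minikube -p preload-demo image list   # busybox should still be listed after the restart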

TestScheduledStopUnix (97.41s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-203479 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-203479 --memory=2048 --driver=docker  --container-runtime=crio: (22.007349718s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-203479 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-203479 -n scheduled-stop-203479
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-203479 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0919 19:39:29.428897  760079 retry.go:31] will retry after 137.828µs: open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/scheduled-stop-203479/pid: no such file or directory
I0919 19:39:29.430067  760079 retry.go:31] will retry after 132.35µs: open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/scheduled-stop-203479/pid: no such file or directory
I0919 19:39:29.431217  760079 retry.go:31] will retry after 214.852µs: open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/scheduled-stop-203479/pid: no such file or directory
I0919 19:39:29.432349  760079 retry.go:31] will retry after 188.997µs: open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/scheduled-stop-203479/pid: no such file or directory
I0919 19:39:29.433473  760079 retry.go:31] will retry after 280.196µs: open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/scheduled-stop-203479/pid: no such file or directory
I0919 19:39:29.434616  760079 retry.go:31] will retry after 829.549µs: open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/scheduled-stop-203479/pid: no such file or directory
I0919 19:39:29.435731  760079 retry.go:31] will retry after 945.301µs: open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/scheduled-stop-203479/pid: no such file or directory
I0919 19:39:29.436855  760079 retry.go:31] will retry after 2.53409ms: open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/scheduled-stop-203479/pid: no such file or directory
I0919 19:39:29.440054  760079 retry.go:31] will retry after 3.528564ms: open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/scheduled-stop-203479/pid: no such file or directory
I0919 19:39:29.444253  760079 retry.go:31] will retry after 3.616242ms: open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/scheduled-stop-203479/pid: no such file or directory
I0919 19:39:29.448449  760079 retry.go:31] will retry after 6.60931ms: open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/scheduled-stop-203479/pid: no such file or directory
I0919 19:39:29.455682  760079 retry.go:31] will retry after 11.686197ms: open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/scheduled-stop-203479/pid: no such file or directory
I0919 19:39:29.467893  760079 retry.go:31] will retry after 13.254522ms: open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/scheduled-stop-203479/pid: no such file or directory
I0919 19:39:29.482153  760079 retry.go:31] will retry after 15.089359ms: open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/scheduled-stop-203479/pid: no such file or directory
I0919 19:39:29.497325  760079 retry.go:31] will retry after 24.209733ms: open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/scheduled-stop-203479/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-203479 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-203479 -n scheduled-stop-203479
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-203479
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-203479 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-203479
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-203479: exit status 7 (64.024594ms)

-- stdout --
	scheduled-stop-203479
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-203479 -n scheduled-stop-203479
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-203479 -n scheduled-stop-203479: exit status 7 (63.321072ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-203479" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-203479
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-203479: (4.115308141s)
--- PASS: TestScheduledStopUnix (97.41s)
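
Scheduled stops are armed, cancelled, and re-armed above; the pid file the retries poll for is how the test tracks the background stop process. The user-facing flow, with a placeholder profile:

    minikube stop -p sched-demo --schedule 5m        # arm a stop five minutes out
    minikube stop -p sched-demo --cancel-scheduled   # disarm it
    minikube stop -p sched-demo --schedule 15s       # re-arm with a short fuse
    sleep 20 && minikube status -p sched-demo        # exits 7 once the host is stopped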

TestInsufficientStorage (12.32s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-993394 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-993394 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.004444995s)

-- stdout --
	{"specversion":"1.0","id":"89af3442-3f23-4a2e-9596-8e46b2b4e672","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-993394] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1b830a04-a3f9-4e5e-aad3-4c5c8f1bd5f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19664"}}
	{"specversion":"1.0","id":"03c2e23d-4dbf-43cc-b002-a5395e4feecf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"19687c31-43b6-43d7-847e-c1f417984f1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19664-753213/kubeconfig"}}
	{"specversion":"1.0","id":"91afddb6-3e48-43ca-a0f0-c549d3d4abf1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-753213/.minikube"}}
	{"specversion":"1.0","id":"8f3887fb-cdd1-483c-9f51-e361dd7c7214","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"99b1b57d-6350-4a60-9bb7-076a0317274b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5619ea96-d399-41e6-bfaf-a8e587067861","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"bd0dc379-1ad8-4ca2-b090-9e029746804c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"f7a03573-2c3e-458d-ac9e-1e3358985148","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"11ac8bfe-2560-414b-a4eb-9fbce228775f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"27b2bf7e-4292-4ba6-96b3-56110c282146","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-993394\" primary control-plane node in \"insufficient-storage-993394\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a9c555b0-938d-4dd1-baeb-811297b42005","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726589491-19662 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"a0bf28bc-1629-4485-b901-62f2d9d98900","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"af8ca6b2-ca55-4e92-8858-f0cb44cd2652","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-993394 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-993394 --output=json --layout=cluster: exit status 7 (255.175255ms)

-- stdout --
	{"Name":"insufficient-storage-993394","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-993394","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0919 19:40:54.689504  947012 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-993394" does not appear in /home/jenkins/minikube-integration/19664-753213/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-993394 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-993394 --output=json --layout=cluster: exit status 7 (254.341315ms)

-- stdout --
	{"Name":"insufficient-storage-993394","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-993394","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0919 19:40:54.944435  947111 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-993394" does not appear in /home/jenkins/minikube-integration/19664-753213/kubeconfig
	E0919 19:40:54.954086  947111 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/insufficient-storage-993394/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-993394" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-993394
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-993394: (1.807983949s)
--- PASS: TestInsufficientStorage (12.32s)

                                                
                                    
TestRunningBinaryUpgrade (62.16s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.782486435 start -p running-upgrade-475297 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.782486435 start -p running-upgrade-475297 --memory=2200 --vm-driver=docker  --container-runtime=crio: (26.939093727s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-475297 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-475297 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.903579086s)
helpers_test.go:175: Cleaning up "running-upgrade-475297" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-475297
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-475297: (2.938084075s)
--- PASS: TestRunningBinaryUpgrade (62.16s)

                                                
                                    
TestKubernetesUpgrade (360.56s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-270444 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-270444 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (58.155797829s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-270444
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-270444: (1.186417747s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-270444 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-270444 status --format={{.Host}}: exit status 7 (80.435334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-270444 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-270444 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m27.568535677s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-270444 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-270444 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-270444 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (65.828332ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-270444] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19664-753213/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-753213/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-270444
	    minikube start -p kubernetes-upgrade-270444 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2704442 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-270444 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-270444 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-270444 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.308718055s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-270444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-270444
E0919 19:46:57.207465  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-270444: (2.129880996s)
--- PASS: TestKubernetesUpgrade (360.56s)
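
Condensing the transcript above, the flow this test exercises is: start on v1.20.0, stop, upgrade the same profile in place to v1.31.1, then assert that an in-place downgrade is rejected. A loose sketch of that sequence as a standalone program (a paraphrase of the commands in the log, not the real version_upgrade_test.go):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func run(args ...string) error {
		out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
		fmt.Printf("$ minikube %s\n%s", strings.Join(args, " "), out)
		return err
	}

	func main() {
		p := "kubernetes-upgrade-270444"
		run("start", "-p", p, "--kubernetes-version=v1.20.0", "--driver=docker", "--container-runtime=crio")
		run("stop", "-p", p)
		run("start", "-p", p, "--kubernetes-version=v1.31.1", "--driver=docker", "--container-runtime=crio")
		// Downgrading an existing cluster must fail (the exit status 106 above).
		if err := run("start", "-p", p, "--kubernetes-version=v1.20.0", "--driver=docker", "--container-runtime=crio"); err == nil {
			fmt.Println("BUG: expected the downgrade to be rejected")
		}
	}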

                                                
                                    
TestMissingContainerUpgrade (128.94s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.535527148 start -p missing-upgrade-180119 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.535527148 start -p missing-upgrade-180119 --memory=2200 --driver=docker  --container-runtime=crio: (1m4.923295691s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-180119
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-180119: (10.532872842s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-180119
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-180119 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-180119 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (48.650169599s)
helpers_test.go:175: Cleaning up "missing-upgrade-180119" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-180119
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-180119: (4.333097828s)
--- PASS: TestMissingContainerUpgrade (128.94s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-060710 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-060710 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (80.033546ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-060710] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19664-753213/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-753213/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
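
The exit status 14 (MK_USAGE) above comes from flag validation: --kubernetes-version is contradictory when --no-kubernetes is set. A minimal sketch of that kind of mutually-exclusive flag check (illustrative only, not minikube's actual code):

	package main

	import (
		"flag"
		"fmt"
		"os"
	)

	func main() {
		noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
		k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
		flag.Parse()

		// Reject the contradictory combination up front, before any work starts.
		if *noK8s && *k8sVersion != "" {
			fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
			os.Exit(14) // matches the exit status 14 reported above
		}
		fmt.Println("flags ok")
	}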

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (41.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-060710 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-060710 --driver=docker  --container-runtime=crio: (41.168832084s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-060710 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (41.48s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (21.92s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-060710 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-060710 --no-kubernetes --driver=docker  --container-runtime=crio: (19.395103213s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-060710 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-060710 status -o json: exit status 2 (296.474964ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-060710","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-060710
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-060710: (2.223649653s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (21.92s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.49s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
E0919 19:41:57.207490  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStoppedBinaryUpgrade/Setup (0.49s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (70.62s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.269619676 start -p stopped-upgrade-047028 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.269619676 start -p stopped-upgrade-047028 --memory=2200 --vm-driver=docker  --container-runtime=crio: (37.421092837s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.269619676 -p stopped-upgrade-047028 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.269619676 -p stopped-upgrade-047028 stop: (4.155672903s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-047028 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-047028 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.042428268s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (70.62s)

                                                
                                    
TestNoKubernetes/serial/Start (11.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-060710 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-060710 --no-kubernetes --driver=docker  --container-runtime=crio: (11.87473236s)
--- PASS: TestNoKubernetes/serial/Start (11.87s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-060710 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-060710 "sudo systemctl is-active --quiet service kubelet": exit status 1 (251.389699ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)
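
The check above relies on systemctl's exit code: `systemctl is-active --quiet` exits 0 when the unit is active and non-zero otherwise (3 typically means inactive, matching the "Process exited with status 3" seen over ssh). A small sketch of reading that exit code from Go, assuming a local systemd rather than the `minikube ssh` hop the test uses:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("kubelet is active")
		case errors.As(err, &exitErr):
			// 3 = inactive/dead for `systemctl is-active`.
			fmt.Printf("kubelet is not active (exit status %d)\n", exitErr.ExitCode())
		default:
			fmt.Println("could not run systemctl:", err)
		}
	}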

                                                
                                    
TestNoKubernetes/serial/ProfileList (35.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (18.886383516s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (16.478023834s)
--- PASS: TestNoKubernetes/serial/ProfileList (35.36s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-060710
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-060710: (1.234341135s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-060710 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-060710 --driver=docker  --container-runtime=crio: (7.16807529s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.17s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-060710 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-060710 "sudo systemctl is-active --quiet service kubelet": exit status 1 (265.11809ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
TestPause/serial/Start (41.73s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-433191 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-433191 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (41.733127897s)
--- PASS: TestPause/serial/Start (41.73s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-047028
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-047028: (1.044538819s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.04s)

                                                
                                    
TestNetworkPlugins/group/false (3.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-832378 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-832378 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (142.901061ms)

                                                
                                                
-- stdout --
	* [false-832378] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19664-753213/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-753213/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 19:43:44.405549  989127 out.go:345] Setting OutFile to fd 1 ...
	I0919 19:43:44.405808  989127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:43:44.405818  989127 out.go:358] Setting ErrFile to fd 2...
	I0919 19:43:44.405824  989127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0919 19:43:44.406012  989127 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19664-753213/.minikube/bin
	I0919 19:43:44.406655  989127 out.go:352] Setting JSON to false
	I0919 19:43:44.407928  989127 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":15974,"bootTime":1726759050,"procs":324,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 19:43:44.408028  989127 start.go:139] virtualization: kvm guest
	I0919 19:43:44.410233  989127 out.go:177] * [false-832378] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0919 19:43:44.411620  989127 out.go:177]   - MINIKUBE_LOCATION=19664
	I0919 19:43:44.411734  989127 notify.go:220] Checking for updates...
	I0919 19:43:44.414077  989127 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 19:43:44.415251  989127 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19664-753213/kubeconfig
	I0919 19:43:44.416436  989127 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19664-753213/.minikube
	I0919 19:43:44.417604  989127 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 19:43:44.418812  989127 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 19:43:44.420527  989127 config.go:182] Loaded profile config "kubernetes-upgrade-270444": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:43:44.420644  989127 config.go:182] Loaded profile config "pause-433191": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0919 19:43:44.420720  989127 config.go:182] Loaded profile config "running-upgrade-475297": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0919 19:43:44.420808  989127 driver.go:394] Setting default libvirt URI to qemu:///system
	I0919 19:43:44.443160  989127 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0919 19:43:44.443271  989127 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 19:43:44.493714  989127 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:true NGoroutines:83 SystemTime:2024-09-19 19:43:44.484026979 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 19:43:44.493827  989127 docker.go:318] overlay module found
	I0919 19:43:44.495774  989127 out.go:177] * Using the docker driver based on user configuration
	I0919 19:43:44.496925  989127 start.go:297] selected driver: docker
	I0919 19:43:44.496935  989127 start.go:901] validating driver "docker" against <nil>
	I0919 19:43:44.496946  989127 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 19:43:44.499054  989127 out.go:201] 
	W0919 19:43:44.500274  989127 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0919 19:43:44.501594  989127 out.go:201] 

                                                
                                                
** /stderr **
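
The expected failure above is driver/runtime validation: cri-o cannot schedule pods without a CNI plugin, so an explicit --cni=false is rejected before anything is created. A minimal sketch of such a pre-flight check (illustrative; the real validation lives inside minikube's start path):

	package main

	import (
		"fmt"
		"os"
	)

	// validateCNI rejects configurations where the chosen container runtime
	// needs a CNI plugin but the user disabled CNI explicitly.
	func validateCNI(containerRuntime, cni string) error {
		if cni == "false" && containerRuntime == "crio" {
			return fmt.Errorf("The %q container runtime requires CNI", containerRuntime)
		}
		return nil
	}

	func main() {
		if err := validateCNI("crio", "false"); err != nil {
			fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
			os.Exit(14) // the MK_USAGE exit code, matching exit status 14 above
		}
	}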
net_test.go:88: 
----------------------- debugLogs start: false-832378 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-832378

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-832378

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-832378

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-832378

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-832378

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-832378

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-832378

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-832378

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-832378

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-832378

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-832378"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-832378"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-832378"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-832378

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-832378"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-832378"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-832378" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-832378" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-832378" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-832378" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-832378" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-832378" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-832378" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-832378" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-832378"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-832378"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-832378"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-832378"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-832378"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-832378" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-832378" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-832378" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-832378"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-832378"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-832378"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-832378"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-832378"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19664-753213/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 19 Sep 2024 19:42:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: kubernetes-upgrade-270444
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19664-753213/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 19 Sep 2024 19:43:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: pause-433191
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19664-753213/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 19 Sep 2024 19:43:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: running-upgrade-475297
contexts:
- context:
    cluster: kubernetes-upgrade-270444
    user: kubernetes-upgrade-270444
  name: kubernetes-upgrade-270444
- context:
    cluster: pause-433191
    extensions:
    - extension:
        last-update: Thu, 19 Sep 2024 19:43:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-433191
  name: pause-433191
- context:
    cluster: running-upgrade-475297
    user: running-upgrade-475297
  name: running-upgrade-475297
current-context: running-upgrade-475297
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-270444
  user:
    client-certificate: /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/kubernetes-upgrade-270444/client.crt
    client-key: /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/kubernetes-upgrade-270444/client.key
- name: pause-433191
  user:
    client-certificate: /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/pause-433191/client.crt
    client-key: /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/pause-433191/client.key
- name: running-upgrade-475297
  user:
    client-certificate: /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/running-upgrade-475297/client.crt
    client-key: /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/running-upgrade-475297/client.key
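
For reference, a kubeconfig like the one dumped above can be read with client-go's clientcmd package. A minimal sketch (assumes k8s.io/client-go is available; the path is this CI host's kubeconfig as seen in the log):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/19664-753213/kubeconfig")
		if err != nil {
			panic(err)
		}
		fmt.Println("current-context:", cfg.CurrentContext)
		for name, cluster := range cfg.Clusters {
			fmt.Printf("cluster %s -> %s\n", name, cluster.Server)
		}
	}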

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-832378

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-832378"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-832378"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-832378"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-832378"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-832378"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-832378"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-832378"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-832378"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-832378"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-832378"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-832378"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-832378"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-832378"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-832378"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-832378"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-832378"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-832378"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-832378"

                                                
                                                
----------------------- debugLogs end: false-832378 [took: 2.875124216s] --------------------------------
helpers_test.go:175: Cleaning up "false-832378" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-832378
--- PASS: TestNetworkPlugins/group/false (3.19s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (21.38s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-433191 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-433191 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.357973596s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (21.38s)

                                                
                                    
TestPause/serial/Pause (0.68s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-433191 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.68s)

                                                
                                    
TestPause/serial/VerifyStatus (0.31s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-433191 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-433191 --output=json --layout=cluster: exit status 2 (304.841285ms)

                                                
                                                
-- stdout --
	{"Name":"pause-433191","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-433191","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)

                                                
                                    
TestPause/serial/Unpause (0.7s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-433191 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.70s)

                                                
                                    
TestPause/serial/PauseAgain (0.83s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-433191 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.83s)

                                                
                                    
TestPause/serial/DeletePaused (2.78s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-433191 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-433191 --alsologtostderr -v=5: (2.778009791s)
--- PASS: TestPause/serial/DeletePaused (2.78s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (3.31s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.243109711s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-433191
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-433191: exit status 1 (24.230985ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-433191: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (3.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (133.85s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-027603 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-027603 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m13.853531383s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (133.85s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (51.47s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-090659 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-090659 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (51.470905454s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (51.47s)

TestStartStop/group/no-preload/serial/DeployApp (8.22s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-090659 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [210bb29f-6041-4c9d-99d2-336ae7f50a69] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [210bb29f-6041-4c9d-99d2-336ae7f50a69] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004410765s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-090659 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.22s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.82s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-090659 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-090659 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.82s)

TestStartStop/group/no-preload/serial/Stop (11.89s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-090659 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-090659 --alsologtostderr -v=3: (11.890654499s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.89s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-090659 -n no-preload-090659
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-090659 -n no-preload-090659: exit status 7 (64.018542ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-090659 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/no-preload/serial/SecondStart (262.08s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-090659 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-090659 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m21.763452921s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-090659 -n no-preload-090659
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (262.08s)

TestStartStop/group/old-k8s-version/serial/DeployApp (7.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-027603 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0673bf83-35e5-4afb-be8b-7ba2d78a9e69] Pending
helpers_test.go:344: "busybox" [0673bf83-35e5-4afb-be8b-7ba2d78a9e69] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0673bf83-35e5-4afb-be8b-7ba2d78a9e69] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.004034851s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-027603 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.37s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-027603 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-027603 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/old-k8s-version/serial/Stop (11.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-027603 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-027603 --alsologtostderr -v=3: (11.943675204s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.94s)

TestStartStop/group/embed-certs/serial/FirstStart (70.32s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-495521 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-495521 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m10.315140268s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (70.32s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-027603 -n old-k8s-version-027603
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-027603 -n old-k8s-version-027603: exit status 7 (71.33299ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-027603 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/old-k8s-version/serial/SecondStart (127.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-027603 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-027603 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m6.869178872s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-027603 -n old-k8s-version-027603
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (127.20s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (70.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-616115 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-616115 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m10.372521506s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (70.37s)

TestStartStop/group/embed-certs/serial/DeployApp (7.24s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-495521 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5b3c9cdf-6ec9-428f-a9e5-39245cf83eba] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5b3c9cdf-6ec9-428f-a9e5-39245cf83eba] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.004159177s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-495521 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.24s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-495521 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-495521 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/embed-certs/serial/Stop (11.86s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-495521 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-495521 --alsologtostderr -v=3: (11.856809263s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.86s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-495521 -n embed-certs-495521
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-495521 -n embed-certs-495521: exit status 7 (70.7134ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-495521 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (262.64s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-495521 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0919 19:48:38.263991  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/functional-141069/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-495521 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m22.328390713s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-495521 -n embed-certs-495521
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (262.64s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-616115 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6e053271-884b-4b8f-a32a-6efdbf55b5ff] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6e053271-884b-4b8f-a32a-6efdbf55b5ff] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003830011s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-616115 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-616115 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-616115 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.92s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.88s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-616115 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-616115 --alsologtostderr -v=3: (11.882427137s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.88s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-616115 -n default-k8s-diff-port-616115
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-616115 -n default-k8s-diff-port-616115: exit status 7 (64.210202ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-616115 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-616115 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-616115 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m22.755213319s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-616115 -n default-k8s-diff-port-616115
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.22s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-zswg9" [6c3be8d4-6cf5-4fba-9a64-cf1d25e179ce] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00344968s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-zswg9" [6c3be8d4-6cf5-4fba-9a64-cf1d25e179ce] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003801826s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-027603 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-027603 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/old-k8s-version/serial/Pause (2.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-027603 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-027603 -n old-k8s-version-027603
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-027603 -n old-k8s-version-027603: exit status 2 (308.21147ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-027603 -n old-k8s-version-027603
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-027603 -n old-k8s-version-027603: exit status 2 (307.527552ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-027603 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-027603 -n old-k8s-version-027603
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-027603 -n old-k8s-version-027603
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.73s)

TestStartStop/group/newest-cni/serial/FirstStart (28.11s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-297206 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-297206 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (28.109903875s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (28.11s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.75s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-297206 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.75s)

TestStartStop/group/newest-cni/serial/Stop (1.2s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-297206 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-297206 --alsologtostderr -v=3: (1.197042308s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.20s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-297206 -n newest-cni-297206
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-297206 -n newest-cni-297206: exit status 7 (68.499501ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-297206 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (14.49s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-297206 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-297206 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (14.16241741s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-297206 -n newest-cni-297206
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (14.49s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-297206 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/newest-cni/serial/Pause (2.64s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-297206 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-297206 -n newest-cni-297206
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-297206 -n newest-cni-297206: exit status 2 (286.992218ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-297206 -n newest-cni-297206
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-297206 -n newest-cni-297206: exit status 2 (293.030787ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-297206 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-297206 -n newest-cni-297206
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-297206 -n newest-cni-297206
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.64s)

TestNetworkPlugins/group/auto/Start (40.23s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-832378 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-832378 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (40.226491035s)
--- PASS: TestNetworkPlugins/group/auto/Start (40.23s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-sm8jh" [092561d6-eed0-4c38-bef6-91aecf2d41ee] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003869449s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-sm8jh" [092561d6-eed0-4c38-bef6-91aecf2d41ee] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004215501s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-090659 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-090659 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/no-preload/serial/Pause (2.81s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-090659 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-090659 -n no-preload-090659
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-090659 -n no-preload-090659: exit status 2 (295.531956ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-090659 -n no-preload-090659
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-090659 -n no-preload-090659: exit status 2 (304.333638ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-090659 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-090659 -n no-preload-090659
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-090659 -n no-preload-090659
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.81s)

TestNetworkPlugins/group/flannel/Start (50.51s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-832378 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0919 19:50:37.872397  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/no-preload-090659/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:50:37.913951  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/no-preload-090659/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:50:37.996107  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/no-preload-090659/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:50:38.158833  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/no-preload-090659/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:50:38.480537  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/no-preload-090659/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:50:39.122717  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/no-preload-090659/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:50:40.404746  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/no-preload-090659/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:50:42.966909  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/no-preload-090659/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:50:48.088803  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/no-preload-090659/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:50:58.331158  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/no-preload-090659/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-832378 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (50.509121327s)
--- PASS: TestNetworkPlugins/group/flannel/Start (50.51s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-832378 "pgrep -a kubelet"
I0919 19:50:59.565919  760079 config.go:182] Loaded profile config "auto-832378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

TestNetworkPlugins/group/auto/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-832378 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ncvvq" [0c61e4c0-5694-45c0-9419-4fed37e0068d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ncvvq" [0c61e4c0-5694-45c0-9419-4fed37e0068d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004606638s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.18s)

TestNetworkPlugins/group/auto/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-832378 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-832378 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-832378 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)

TestNetworkPlugins/group/enable-default-cni/Start (67.94s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-832378 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-832378 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m7.943365921s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (67.94s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-m4s9l" [e9e9625a-1340-411e-a222-9fd9fdadef78] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00417683s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-832378 "pgrep -a kubelet"
I0919 19:51:34.673324  760079 config.go:182] Loaded profile config "flannel-832378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-832378 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8w8ss" [cda41671-7a31-416d-b0ef-782c4d5c11e0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8w8ss" [cda41671-7a31-416d-b0ef-782c4d5c11e0] Running
E0919 19:51:40.279127  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/addons-685250/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:51:41.330756  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/functional-141069/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:51:43.855882  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/old-k8s-version-027603/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:51:43.862290  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/old-k8s-version-027603/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:51:43.873714  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/old-k8s-version-027603/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:51:43.895146  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/old-k8s-version-027603/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:51:43.936629  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/old-k8s-version-027603/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:51:44.018039  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/old-k8s-version-027603/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:51:44.179454  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/old-k8s-version-027603/client.crt: no such file or directory" logger="UnhandledError"
E0919 19:51:44.501089  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/old-k8s-version-027603/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003993762s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.18s)

TestNetworkPlugins/group/flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-832378 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

TestNetworkPlugins/group/flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-832378 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

TestNetworkPlugins/group/flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-832378 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0919 19:51:45.142676  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/old-k8s-version-027603/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

TestNetworkPlugins/group/bridge/Start (37.01s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-832378 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0919 19:52:24.832197  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/old-k8s-version-027603/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-832378 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (37.011376394s)
--- PASS: TestNetworkPlugins/group/bridge/Start (37.01s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-832378 "pgrep -a kubelet"
I0919 19:52:36.392953  760079 config.go:182] Loaded profile config "enable-default-cni-832378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-832378 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8z49m" [03b06b33-b0ec-4991-ab74-277702b417d8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8z49m" [03b06b33-b0ec-4991-ab74-277702b417d8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004397934s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.18s)
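
kubectl's replace --force deletes and recreates the object rather than patching it, so each plugin run starts from a fresh netcat deployment before the harness polls for app=netcat pods. Roughly equivalent by hand (the 15m timeout mirrors the test's wait):

	kubectl --context enable-default-cni-832378 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context enable-default-cni-832378 wait --for=condition=Ready \
	  pod -l app=netcat --timeout=15m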

TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-832378 "pgrep -a kubelet"
I0919 19:52:41.885411  760079 config.go:182] Loaded profile config "bridge-832378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

TestNetworkPlugins/group/bridge/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-832378 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rlh8j" [6d76d2d8-c9d7-4067-bd10-32cfcdac8d62] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-rlh8j" [6d76d2d8-c9d7-4067-bd10-32cfcdac8d62] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.00446089s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.18s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-832378 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-832378 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-832378 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-8cd7m" [e456bcfd-9804-4ab0-aa93-e05cc45175dc] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003112778s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestNetworkPlugins/group/bridge/DNS (21.61s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-832378 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-832378 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.123278328s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
I0919 19:53:06.184446  760079 retry.go:31] will retry after 1.339896543s: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context bridge-832378 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context bridge-832378 exec deployment/netcat -- nslookup kubernetes.default: (5.148142876s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (21.61s)
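
The first in-pod lookup here timed out and the harness retried (retry.go) until it resolved. A hedged shell equivalent of that retry, with an illustrative attempt count and delay rather than the harness's computed backoff:

	for i in 1 2 3; do
	  kubectl --context bridge-832378 exec deployment/netcat -- \
	    nslookup kubernetes.default && break
	  sleep 2
	done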

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-8cd7m" [e456bcfd-9804-4ab0-aa93-e05cc45175dc] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004070402s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-495521 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-495521 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.89s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-495521 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-495521 -n embed-certs-495521
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-495521 -n embed-certs-495521: exit status 2 (323.856799ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-495521 -n embed-certs-495521
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-495521 -n embed-certs-495521: exit status 2 (311.197511ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-495521 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-495521 -n embed-certs-495521
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-495521 -n embed-certs-495521
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.89s)
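
The Pause subtest is a round trip: pause the profile, confirm the apiserver reports Paused and the kubelet Stopped (status deliberately exits 2 for a non-Running component, hence the "may be ok" notes above), then unpause and re-check. The same sequence by hand:

	out/minikube-linux-amd64 pause -p embed-certs-495521
	out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-495521   # Paused, exit 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-495521     # Stopped, exit 2
	out/minikube-linux-amd64 unpause -p embed-certs-495521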

TestNetworkPlugins/group/calico/Start (54.55s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-832378 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0919 19:53:05.793815  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/old-k8s-version-027603/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-832378 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (54.548914623s)
--- PASS: TestNetworkPlugins/group/calico/Start (54.55s)

TestNetworkPlugins/group/kindnet/Start (40.47s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-832378 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-832378 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (40.466938369s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (40.47s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-832378 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-832378 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

TestNetworkPlugins/group/custom-flannel/Start (46.54s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-832378 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-832378 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (46.538952743s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (46.54s)
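
Unlike the keyword variants elsewhere in this group (bridge, calico, kindnet, flannel), this run passes --cni a manifest path, which minikube applies as a custom CNI. The distinguishing flag, lifted from the invocation above:

	out/minikube-linux-amd64 start -p custom-flannel-832378 --memory=3072 \
	  --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio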

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-72vms" [768d9f0c-41bb-41b3-b8c8-d6dcc190a9ed] Running
E0919 19:53:38.263868  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/functional-141069/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00362491s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)
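
UserAppExistsAfterStop checks that a workload deployed before the stop/start cycle (here the dashboard) comes back on its own. The wait it performs is roughly this kubectl equivalent, using the namespace and label from the log:

	kubectl --context default-k8s-diff-port-616115 -n kubernetes-dashboard wait \
	  --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m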

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-72vms" [768d9f0c-41bb-41b3-b8c8-d6dcc190a9ed] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004449633s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-616115 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-616115 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-616115 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-616115 -n default-k8s-diff-port-616115
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-616115 -n default-k8s-diff-port-616115: exit status 2 (334.141773ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-616115 -n default-k8s-diff-port-616115
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-616115 -n default-k8s-diff-port-616115: exit status 2 (323.841372ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-616115 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-616115 -n default-k8s-diff-port-616115
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-616115 -n default-k8s-diff-port-616115
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.02s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-x2qq2" [6fe48117-8315-4fc2-9865-03fcc1d3b8fe] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004774744s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
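
ControllerPod waits for the CNI's own agent pod to be Ready before the traffic tests run. A hand-run equivalent using the label from the log:

	kubectl --context kindnet-832378 -n kube-system wait --for=condition=Ready \
	  pod -l app=kindnet --timeout=10m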

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-832378 "pgrep -a kubelet"
I0919 19:53:54.171590  760079 config.go:182] Loaded profile config "kindnet-832378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-832378 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-x2nzv" [7973ed6e-8bef-430b-8eb1-6f4dd75e5c77] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-x2nzv" [7973ed6e-8bef-430b-8eb1-6f4dd75e5c77] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.005066581s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.18s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-dbs2m" [777e7008-2074-41bb-aedf-8a6f566bfb72] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004287318s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-832378 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-832378 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-832378 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-832378 "pgrep -a kubelet"
I0919 19:54:06.118213  760079 config.go:182] Loaded profile config "calico-832378": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

TestNetworkPlugins/group/calico/NetCatPod (9.17s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-832378 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-l7kmn" [9222a290-1e05-4639-8ab6-043572270ecc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-l7kmn" [9222a290-1e05-4639-8ab6-043572270ecc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004681356s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.17s)

TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-832378 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-832378 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-832378 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-832378 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-832378 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-x9mwh" [aecbd45d-6c48-413a-a3e6-c2e87289197c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-x9mwh" [aecbd45d-6c48-413a-a3e6-c2e87289197c] Running
E0919 19:54:27.715457  760079 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/old-k8s-version-027603/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.002981945s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.19s)

TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-832378 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-832378 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-832378 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

Test skip (25/327)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestAddons/serial/Volcano (0s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-944604" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-944604
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

TestNetworkPlugins/group/kubenet (3.11s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-832378 [pass: true] --------------------------------
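
Every probe below fails the same way because the kubenet profile was never created: the test skips before minikube start runs, yet debugLogs still walks its full checklist. Two quick hand-run checks that confirm the missing profile:

	kubectl config get-contexts kubenet-832378   # expect: context not found
	out/minikube-linux-amd64 profile list        # kubenet-832378 absent
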
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-832378

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-832378

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-832378

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-832378

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-832378

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-832378

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-832378

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-832378

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-832378

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-832378

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-832378"
>>> host: /etc/hosts:
* Profile "kubenet-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-832378"

>>> host: /etc/resolv.conf:
* Profile "kubenet-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-832378"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-832378

>>> host: crictl pods:
* Profile "kubenet-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-832378"

>>> host: crictl containers:
* Profile "kubenet-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-832378"

>>> k8s: describe netcat deployment:
error: context "kubenet-832378" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-832378" does not exist

>>> k8s: netcat logs:
error: context "kubenet-832378" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-832378" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-832378" does not exist

>>> k8s: coredns logs:
error: context "kubenet-832378" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-832378" does not exist

>>> k8s: api server logs:
error: context "kubenet-832378" does not exist

>>> host: /etc/cni:
* Profile "kubenet-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-832378"

>>> host: ip a s:
* Profile "kubenet-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-832378"

>>> host: ip r s:
* Profile "kubenet-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-832378"

>>> host: iptables-save:
* Profile "kubenet-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-832378"

>>> host: iptables table nat:
* Profile "kubenet-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-832378"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-832378" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-832378" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-832378" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-832378"

>>> host: kubelet daemon config:
* Profile "kubenet-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-832378"

>>> k8s: kubelet logs:
* Profile "kubenet-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-832378"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-832378"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-832378"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19664-753213/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 19 Sep 2024 19:42:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: kubernetes-upgrade-270444
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19664-753213/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 19 Sep 2024 19:43:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: pause-433191
contexts:
- context:
    cluster: kubernetes-upgrade-270444
    user: kubernetes-upgrade-270444
  name: kubernetes-upgrade-270444
- context:
    cluster: pause-433191
    extensions:
    - extension:
        last-update: Thu, 19 Sep 2024 19:43:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-433191
  name: pause-433191
current-context: pause-433191
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-270444
  user:
    client-certificate: /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/kubernetes-upgrade-270444/client.crt
    client-key: /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/kubernetes-upgrade-270444/client.key
- name: pause-433191
  user:
    client-certificate: /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/pause-433191/client.crt
    client-key: /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/pause-433191/client.key
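
The dump above lists two leftover live profiles (kubernetes-upgrade-270444 and pause-433191) and, consistent with every error in this section, no kubenet entry. Either live context can still be selected explicitly, e.g.:

	kubectl --context pause-433191 get nodes
	kubectl config use-context kubernetes-upgrade-270444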

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-832378

>>> host: docker daemon status:
* Profile "kubenet-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-832378"

>>> host: docker daemon config:
* Profile "kubenet-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-832378"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-832378"

>>> host: docker system info:
* Profile "kubenet-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-832378"

>>> host: cri-docker daemon status:
* Profile "kubenet-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-832378"

>>> host: cri-docker daemon config:
* Profile "kubenet-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-832378"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-832378"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-832378"

>>> host: cri-dockerd version:
* Profile "kubenet-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-832378"

>>> host: containerd daemon status:
* Profile "kubenet-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-832378"

>>> host: containerd daemon config:
* Profile "kubenet-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-832378"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-832378"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-832378"

>>> host: containerd config dump:
* Profile "kubenet-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-832378"

>>> host: crio daemon status:
* Profile "kubenet-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-832378"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-832378"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-832378"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-832378"

                                                
                                                
----------------------- debugLogs end: kubenet-832378 [took: 2.943063073s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-832378" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-832378
--- SKIP: TestNetworkPlugins/group/kubenet (3.11s)

TestNetworkPlugins/group/cilium (3.45s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-832378 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-832378

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-832378

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-832378

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-832378

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-832378

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-832378

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-832378

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-832378

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-832378

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-832378

>>> host: /etc/nsswitch.conf:
* Profile "cilium-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-832378"

>>> host: /etc/hosts:
* Profile "cilium-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-832378"

>>> host: /etc/resolv.conf:
* Profile "cilium-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-832378"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-832378

>>> host: crictl pods:
* Profile "cilium-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-832378"

>>> host: crictl containers:
* Profile "cilium-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-832378"

>>> k8s: describe netcat deployment:
error: context "cilium-832378" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-832378" does not exist

>>> k8s: netcat logs:
error: context "cilium-832378" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-832378" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-832378" does not exist

>>> k8s: coredns logs:
error: context "cilium-832378" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-832378" does not exist

>>> k8s: api server logs:
error: context "cilium-832378" does not exist

>>> host: /etc/cni:
* Profile "cilium-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-832378"

>>> host: ip a s:
* Profile "cilium-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-832378"

>>> host: ip r s:
* Profile "cilium-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-832378"

>>> host: iptables-save:
* Profile "cilium-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-832378"

>>> host: iptables table nat:
* Profile "cilium-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-832378"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-832378

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-832378

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-832378" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-832378" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-832378

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-832378

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-832378" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-832378" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-832378" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-832378" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-832378" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-832378"

>>> host: kubelet daemon config:
* Profile "cilium-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-832378"

>>> k8s: kubelet logs:
* Profile "cilium-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-832378"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-832378"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-832378"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19664-753213/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 19 Sep 2024 19:42:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: kubernetes-upgrade-270444
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19664-753213/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 19 Sep 2024 19:43:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: pause-433191
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19664-753213/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 19 Sep 2024 19:43:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: running-upgrade-475297
contexts:
- context:
    cluster: kubernetes-upgrade-270444
    user: kubernetes-upgrade-270444
  name: kubernetes-upgrade-270444
- context:
    cluster: pause-433191
    extensions:
    - extension:
        last-update: Thu, 19 Sep 2024 19:43:33 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-433191
  name: pause-433191
- context:
    cluster: running-upgrade-475297
    user: running-upgrade-475297
  name: running-upgrade-475297
current-context: running-upgrade-475297
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-270444
  user:
    client-certificate: /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/kubernetes-upgrade-270444/client.crt
    client-key: /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/kubernetes-upgrade-270444/client.key
- name: pause-433191
  user:
    client-certificate: /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/pause-433191/client.crt
    client-key: /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/pause-433191/client.key
- name: running-upgrade-475297
  user:
    client-certificate: /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/running-upgrade-475297/client.crt
    client-key: /home/jenkins/minikube-integration/19664-753213/.minikube/profiles/running-upgrade-475297/client.key
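Note: compared with the kubeconfig captured during the kubenet probes above, this dump has gained a running-upgrade-475297 entry (last-update 19:43:43, ten seconds after pause-433191's 19:43:33), and current-context has moved from pause-433191 to running-upgrade-475297; the upgrade tests running in parallel mutate the shared kubeconfig between snapshots. Either way there is still no cilium-832378 context, so the probes below fail identically. A quick check (assumes kubectl on PATH):

  kubectl config current-context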
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-832378

>>> host: docker daemon status:
* Profile "cilium-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-832378"

>>> host: docker daemon config:
* Profile "cilium-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-832378"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-832378"

>>> host: docker system info:
* Profile "cilium-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-832378"

>>> host: cri-docker daemon status:
* Profile "cilium-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-832378"

>>> host: cri-docker daemon config:
* Profile "cilium-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-832378"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-832378"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-832378"

>>> host: cri-dockerd version:
* Profile "cilium-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-832378"

>>> host: containerd daemon status:
* Profile "cilium-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-832378"

>>> host: containerd daemon config:
* Profile "cilium-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-832378"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-832378"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-832378"

>>> host: containerd config dump:
* Profile "cilium-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-832378"

>>> host: crio daemon status:
* Profile "cilium-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-832378"

>>> host: crio daemon config:
* Profile "cilium-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-832378"

>>> host: /etc/crio:
* Profile "cilium-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-832378"

>>> host: crio config:
* Profile "cilium-832378" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-832378"

----------------------- debugLogs end: cilium-832378 [took: 3.305203657s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-832378" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-832378
--- SKIP: TestNetworkPlugins/group/cilium (3.45s)