Test Report: Docker_Windows 19662

3f64d3c641e64b460ff7a3cff080aebef74ca5ca:2024-09-17:36258

Failed tests (3/340)

Order  Failed test                                       Duration (s)
33     TestAddons/parallel/Registry                      79.77
56     TestErrorSpam/setup                               67.37
80     TestFunctional/serial/MinikubeKubectlCmdDirectly  5.32
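
The registry failure below reduces to a single in-cluster probe timing out: the test launches a busybox pod and runs wget --spider against the registry Service in kube-system. A minimal manual reproduction of that probe (a sketch, assuming the addons-000400 profile is still running and the registry addon is enabled) is simply the command the test itself logs further down:

	# Re-run the same connectivity probe the test performs
	kubectl --context addons-000400 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

A healthy registry answers with HTTP/1.1 200; in this run the command exited with status 1 after roughly a minute with "error: timed out waiting for the condition".
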
TestAddons/parallel/Registry (79.77s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 11.242ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-kzz9h" [a9a220b4-295f-4642-a504-2a586405571c] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0120745s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-jnlf8" [abc14e2f-1004-4195-a33b-94ce29de0c38] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.072845s
addons_test.go:342: (dbg) Run:  kubectl --context addons-000400 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-000400 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-000400 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.3072627s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-000400 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: Unable to complete rest of the test due to connectivity assumptions
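
Before digging into the post-mortem dump that follows, the connectivity assumption itself can be checked directly. An illustrative pair of commands (assuming the addons-000400 cluster is still reachable, and that the Service behind registry.kube-system.svc.cluster.local is named "registry"):

	# Does the registry Service exist, and does it have ready endpoints?
	kubectl --context addons-000400 -n kube-system get svc registry
	kubectl --context addons-000400 -n kube-system get endpoints registry
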
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-000400
helpers_test.go:235: (dbg) docker inspect addons-000400:

-- stdout --
	[
	    {
	        "Id": "8975097f0b11e6b7818040237f930c99a3b9f576fcb38f072fef2be62421fa85",
	        "Created": "2024-09-17T16:58:39.632573252Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 19130,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-17T16:58:48.661015086Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bb3bcbaabeeeadbf6b43ae7d1d07e504b3c8a94ec024df89bcb237eba4f5e9b3",
	        "ResolvConfPath": "/var/lib/docker/containers/8975097f0b11e6b7818040237f930c99a3b9f576fcb38f072fef2be62421fa85/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8975097f0b11e6b7818040237f930c99a3b9f576fcb38f072fef2be62421fa85/hostname",
	        "HostsPath": "/var/lib/docker/containers/8975097f0b11e6b7818040237f930c99a3b9f576fcb38f072fef2be62421fa85/hosts",
	        "LogPath": "/var/lib/docker/containers/8975097f0b11e6b7818040237f930c99a3b9f576fcb38f072fef2be62421fa85/8975097f0b11e6b7818040237f930c99a3b9f576fcb38f072fef2be62421fa85-json.log",
	        "Name": "/addons-000400",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-000400:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-000400",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b945c45fa93500f0ba5f07d36ca325871f4e67eb145da9a40a37732b93173396-init/diff:/var/lib/docker/overlay2/af5d248a82a7dcbc887b000566b84b9011e4a8e13e36234ddfbc9ecd69f656b2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b945c45fa93500f0ba5f07d36ca325871f4e67eb145da9a40a37732b93173396/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b945c45fa93500f0ba5f07d36ca325871f4e67eb145da9a40a37732b93173396/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b945c45fa93500f0ba5f07d36ca325871f4e67eb145da9a40a37732b93173396/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-000400",
	                "Source": "/var/lib/docker/volumes/addons-000400/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-000400",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-000400",
	                "name.minikube.sigs.k8s.io": "addons-000400",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b6efa031432ffb5e038e9584b0159d34fa2679c12fcca3f117d03e8bd881b336",
	            "SandboxKey": "/var/run/docker/netns/b6efa031432f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53750"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53751"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53752"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53753"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53754"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-000400": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "fb6929152cc2a646f07afba31ee55bf2c93a63f77819aa57ac7679db0594ff60",
	                    "EndpointID": "e64672fedd955f8746a410dd32f40c9e0b1ef021678b3d2a4a5725b89c8863a8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-000400",
	                        "8975097f0b11"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
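
The full docker inspect dump above can be narrowed to the fields most relevant here (container state and published ports) using docker's Go-template formatting; an illustrative invocation:

	# Print only the container state and the host-port mappings
	docker inspect -f '{{.State.Status}} {{json .NetworkSettings.Ports}}' addons-000400
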
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-000400 -n addons-000400
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-000400 -n addons-000400: (1.3592635s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-000400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-000400 logs -n 25: (3.9786846s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|-------------------|---------|---------------------|---------------------|
	| delete  | -p download-only-073100                                                                     | download-only-073100   | minikube2\jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| start   | -o=json --download-only                                                                     | download-only-831300   | minikube2\jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |                     |
	|         | -p download-only-831300                                                                     |                        |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |                   |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                        |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |                   |         |                     |                     |
	|         | --driver=docker                                                                             |                        |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | minikube2\jenkins | v1.34.0 | 17 Sep 24 16:56 UTC | 17 Sep 24 16:56 UTC |
	| delete  | -p download-only-831300                                                                     | download-only-831300   | minikube2\jenkins | v1.34.0 | 17 Sep 24 16:56 UTC | 17 Sep 24 16:56 UTC |
	| delete  | -p download-only-073100                                                                     | download-only-073100   | minikube2\jenkins | v1.34.0 | 17 Sep 24 16:56 UTC | 17 Sep 24 16:56 UTC |
	| delete  | -p download-only-831300                                                                     | download-only-831300   | minikube2\jenkins | v1.34.0 | 17 Sep 24 16:56 UTC | 17 Sep 24 16:56 UTC |
	| start   | --download-only -p                                                                          | download-docker-908600 | minikube2\jenkins | v1.34.0 | 17 Sep 24 16:56 UTC |                     |
	|         | download-docker-908600                                                                      |                        |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |                   |         |                     |                     |
	|         | --driver=docker                                                                             |                        |                   |         |                     |                     |
	| delete  | -p download-docker-908600                                                                   | download-docker-908600 | minikube2\jenkins | v1.34.0 | 17 Sep 24 16:56 UTC | 17 Sep 24 16:56 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-946000   | minikube2\jenkins | v1.34.0 | 17 Sep 24 16:56 UTC |                     |
	|         | binary-mirror-946000                                                                        |                        |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |                   |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |                   |         |                     |                     |
	|         | http://127.0.0.1:53675                                                                      |                        |                   |         |                     |                     |
	|         | --driver=docker                                                                             |                        |                   |         |                     |                     |
	| delete  | -p binary-mirror-946000                                                                     | binary-mirror-946000   | minikube2\jenkins | v1.34.0 | 17 Sep 24 16:56 UTC | 17 Sep 24 16:56 UTC |
	| addons  | disable dashboard -p                                                                        | addons-000400          | minikube2\jenkins | v1.34.0 | 17 Sep 24 16:56 UTC |                     |
	|         | addons-000400                                                                               |                        |                   |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-000400          | minikube2\jenkins | v1.34.0 | 17 Sep 24 16:56 UTC |                     |
	|         | addons-000400                                                                               |                        |                   |         |                     |                     |
	| start   | -p addons-000400 --wait=true                                                                | addons-000400          | minikube2\jenkins | v1.34.0 | 17 Sep 24 16:56 UTC | 17 Sep 24 17:05 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                        |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |                   |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |                   |         |                     |                     |
	|         | --driver=docker --addons=ingress                                                            |                        |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |                   |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |                   |         |                     |                     |
	| addons  | addons-000400 addons disable                                                                | addons-000400          | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:06 UTC | 17 Sep 24 17:06 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |                   |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-000400          | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:14 UTC | 17 Sep 24 17:14 UTC |
	|         | addons-000400                                                                               |                        |                   |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-000400          | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:14 UTC | 17 Sep 24 17:14 UTC |
	|         | -p addons-000400                                                                            |                        |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |                   |         |                     |                     |
	| ssh     | addons-000400 ssh cat                                                                       | addons-000400          | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:15 UTC | 17 Sep 24 17:15 UTC |
	|         | /opt/local-path-provisioner/pvc-cc8235e9-11e6-4250-b675-625b2079ef9c_default_test-pvc/file1 |                        |                   |         |                     |                     |
	| addons  | addons-000400 addons disable                                                                | addons-000400          | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:15 UTC | 17 Sep 24 17:15 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |                   |         |                     |                     |
	| addons  | addons-000400 addons disable                                                                | addons-000400          | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:15 UTC | 17 Sep 24 17:15 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |                   |         |                     |                     |
	|         | -v=1                                                                                        |                        |                   |         |                     |                     |
	| addons  | addons-000400 addons disable                                                                | addons-000400          | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:15 UTC | 17 Sep 24 17:15 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |                   |         |                     |                     |
	|         | -v=1                                                                                        |                        |                   |         |                     |                     |
	| addons  | addons-000400 addons                                                                        | addons-000400          | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:15 UTC | 17 Sep 24 17:15 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |                   |         |                     |                     |
	| addons  | addons-000400 addons                                                                        | addons-000400          | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:15 UTC | 17 Sep 24 17:15 UTC |
	|         | disable metrics-server                                                                      |                        |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |                   |         |                     |                     |
	| addons  | addons-000400 addons                                                                        | addons-000400          | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:15 UTC | 17 Sep 24 17:15 UTC |
	|         | disable volumesnapshots                                                                     |                        |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |                   |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-000400          | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:15 UTC |                     |
	|         | addons-000400                                                                               |                        |                   |         |                     |                     |
	| addons  | addons-000400 addons disable                                                                | addons-000400          | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:16 UTC |                     |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 16:56:18
	Running on machine: minikube2
	Binary: Built with gc go1.23.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 16:56:18.236478    4408 out.go:345] Setting OutFile to fd 964 ...
	I0917 16:56:18.318507    4408 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 16:56:18.318507    4408 out.go:358] Setting ErrFile to fd 776...
	I0917 16:56:18.318507    4408 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 16:56:18.340706    4408 out.go:352] Setting JSON to false
	I0917 16:56:18.344273    4408 start.go:129] hostinfo: {"hostname":"minikube2","uptime":6906,"bootTime":1726585272,"procs":182,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0917 16:56:18.344419    4408 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 16:56:18.349719    4408 out.go:177] * [addons-000400] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0917 16:56:18.354677    4408 notify.go:220] Checking for updates...
	I0917 16:56:18.357278    4408 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0917 16:56:18.360279    4408 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 16:56:18.363300    4408 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0917 16:56:18.365288    4408 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 16:56:18.368433    4408 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 16:56:18.371462    4408 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 16:56:18.548123    4408 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.1 (166053)
	I0917 16:56:18.556874    4408 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 16:56:18.871694    4408 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:76 SystemTime:2024-09-17 16:56:18.839374763 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657532416 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0917 16:56:18.874957    4408 out.go:177] * Using the docker driver based on user configuration
	I0917 16:56:18.878042    4408 start.go:297] selected driver: docker
	I0917 16:56:18.878042    4408 start.go:901] validating driver "docker" against <nil>
	I0917 16:56:18.878042    4408 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 16:56:18.941012    4408 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 16:56:19.262355    4408 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:76 SystemTime:2024-09-17 16:56:19.230879818 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657532416 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0917 16:56:19.263972    4408 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 16:56:19.267344    4408 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 16:56:19.291461    4408 out.go:177] * Using Docker Desktop driver with root privileges
	I0917 16:56:19.294925    4408 cni.go:84] Creating CNI manager for ""
	I0917 16:56:19.295007    4408 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 16:56:19.295007    4408 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 16:56:19.295007    4408 start.go:340] cluster config:
	{Name:addons-000400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-000400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 16:56:19.298171    4408 out.go:177] * Starting "addons-000400" primary control-plane node in "addons-000400" cluster
	I0917 16:56:19.301806    4408 cache.go:121] Beginning downloading kic base image for docker with docker
	I0917 16:56:19.303364    4408 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0917 16:56:19.307058    4408 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 16:56:19.307058    4408 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0917 16:56:19.307058    4408 preload.go:146] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 16:56:19.307948    4408 cache.go:56] Caching tarball of preloaded images
	I0917 16:56:19.308298    4408 preload.go:172] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 16:56:19.308418    4408 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 16:56:19.309170    4408 profile.go:143] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-000400\config.json ...
	I0917 16:56:19.309608    4408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-000400\config.json: {Name:mkb5360e8f7da2beb09eb2c0e0a01c2b1151d520 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:56:19.385463    4408 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0917 16:56:19.385603    4408 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1726589491-19662@sha256_6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4.tar
	I0917 16:56:19.385883    4408 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1726589491-19662@sha256_6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4.tar
	I0917 16:56:19.385946    4408 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0917 16:56:19.386103    4408 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0917 16:56:19.386103    4408 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0917 16:56:19.386319    4408 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0917 16:56:19.386319    4408 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0917 16:56:19.386415    4408 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1726589491-19662@sha256_6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4.tar
	I0917 16:57:33.513211    4408 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0917 16:57:33.513211    4408 cache.go:194] Successfully downloaded all kic artifacts
	I0917 16:57:33.513750    4408 start.go:360] acquireMachinesLock for addons-000400: {Name:mk25f6529a30bc47c0d421b3a098f03c82cffe57 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 16:57:33.513976    4408 start.go:364] duration metric: took 100µs to acquireMachinesLock for "addons-000400"
	I0917 16:57:33.514295    4408 start.go:93] Provisioning new machine with config: &{Name:addons-000400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-000400 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 16:57:33.514583    4408 start.go:125] createHost starting for "" (driver="docker")
	I0917 16:57:33.517640    4408 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0917 16:57:33.518477    4408 start.go:159] libmachine.API.Create for "addons-000400" (driver="docker")
	I0917 16:57:33.518538    4408 client.go:168] LocalClient.Create starting
	I0917 16:57:33.519993    4408 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0917 16:57:33.759080    4408 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0917 16:57:34.235958    4408 cli_runner.go:164] Run: docker network inspect addons-000400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0917 16:57:34.303971    4408 cli_runner.go:211] docker network inspect addons-000400 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0917 16:57:34.317404    4408 network_create.go:284] running [docker network inspect addons-000400] to gather additional debugging logs...
	I0917 16:57:34.317404    4408 cli_runner.go:164] Run: docker network inspect addons-000400
	W0917 16:57:34.387758    4408 cli_runner.go:211] docker network inspect addons-000400 returned with exit code 1
	I0917 16:57:34.387758    4408 network_create.go:287] error running [docker network inspect addons-000400]: docker network inspect addons-000400: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-000400 not found
	I0917 16:57:34.387758    4408 network_create.go:289] output of [docker network inspect addons-000400]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-000400 not found
	
	** /stderr **
	I0917 16:57:34.396524    4408 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 16:57:34.484157    4408 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001281b90}
	I0917 16:57:34.484157    4408 network_create.go:124] attempt to create docker network addons-000400 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0917 16:57:34.491107    4408 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-000400 addons-000400
	I0917 16:57:34.665440    4408 network_create.go:108] docker network addons-000400 192.168.49.0/24 created
	I0917 16:57:34.665440    4408 kic.go:121] calculated static IP "192.168.49.2" for the "addons-000400" container
	I0917 16:57:34.687065    4408 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 16:57:34.772049    4408 cli_runner.go:164] Run: docker volume create addons-000400 --label name.minikube.sigs.k8s.io=addons-000400 --label created_by.minikube.sigs.k8s.io=true
	I0917 16:57:34.851612    4408 oci.go:103] Successfully created a docker volume addons-000400
	I0917 16:57:34.860492    4408 cli_runner.go:164] Run: docker run --rm --name addons-000400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-000400 --entrypoint /usr/bin/test -v addons-000400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
	I0917 16:58:00.084691    4408 cli_runner.go:217] Completed: docker run --rm --name addons-000400-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-000400 --entrypoint /usr/bin/test -v addons-000400:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (25.2239991s)
	I0917 16:58:00.084691    4408 oci.go:107] Successfully prepared a docker volume addons-000400
	I0917 16:58:00.084691    4408 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 16:58:00.084691    4408 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 16:58:00.094669    4408 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-000400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 16:58:38.908150    4408 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-000400:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (38.8130732s)
	I0917 16:58:38.908235    4408 kic.go:203] duration metric: took 38.823187s to extract preloaded images to volume ...
	I0917 16:58:38.917101    4408 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 16:58:39.233757    4408 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:80 SystemTime:2024-09-17 16:58:39.200670236 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657532416 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0917 16:58:39.243255    4408 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 16:58:39.563295    4408 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-000400 --name addons-000400 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-000400 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-000400 --network addons-000400 --ip 192.168.49.2 --volume addons-000400:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
	I0917 16:58:49.191096    4408 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-000400 --name addons-000400 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-000400 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-000400 --network addons-000400 --ip 192.168.49.2 --volume addons-000400:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4: (9.627723s)
	I0917 16:58:49.207007    4408 cli_runner.go:164] Run: docker container inspect addons-000400 --format={{.State.Running}}
	I0917 16:58:49.411187    4408 cli_runner.go:164] Run: docker container inspect addons-000400 --format={{.State.Status}}
	I0917 16:58:49.606371    4408 cli_runner.go:164] Run: docker exec addons-000400 stat /var/lib/dpkg/alternatives/iptables
	I0917 16:58:50.009212    4408 oci.go:144] the created container "addons-000400" has a running status.
	I0917 16:58:50.009212    4408 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-000400\id_rsa...
	I0917 16:58:50.293468    4408 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-000400\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 16:58:50.517774    4408 cli_runner.go:164] Run: docker container inspect addons-000400 --format={{.State.Status}}
	I0917 16:58:50.618802    4408 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0917 16:58:50.618802    4408 kic_runner.go:114] Args: [docker exec --privileged addons-000400 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0917 16:58:50.852528    4408 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-000400\id_rsa...
	I0917 16:58:54.853738    4408 cli_runner.go:164] Run: docker container inspect addons-000400 --format={{.State.Status}}
	I0917 16:58:55.039101    4408 machine.go:93] provisionDockerMachine start ...
	I0917 16:58:55.052073    4408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-000400
	I0917 16:58:55.225062    4408 main.go:141] libmachine: Using SSH client type: native
	I0917 16:58:55.240081    4408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf99a00] 0xf9c540 <nil>  [] 0s} 127.0.0.1 53750 <nil> <nil>}
	I0917 16:58:55.240081    4408 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 16:58:55.462120    4408 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-000400
	
	I0917 16:58:55.462120    4408 ubuntu.go:169] provisioning hostname "addons-000400"
	I0917 16:58:55.475142    4408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-000400
	I0917 16:58:55.651125    4408 main.go:141] libmachine: Using SSH client type: native
	I0917 16:58:55.651125    4408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf99a00] 0xf9c540 <nil>  [] 0s} 127.0.0.1 53750 <nil> <nil>}
	I0917 16:58:55.651125    4408 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-000400 && echo "addons-000400" | sudo tee /etc/hostname
	I0917 16:58:55.883135    4408 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-000400
	
	I0917 16:58:55.897136    4408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-000400
	I0917 16:58:56.072145    4408 main.go:141] libmachine: Using SSH client type: native
	I0917 16:58:56.072145    4408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf99a00] 0xf9c540 <nil>  [] 0s} 127.0.0.1 53750 <nil> <nil>}
	I0917 16:58:56.072145    4408 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-000400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-000400/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-000400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 16:58:56.277796    4408 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 16:58:56.277796    4408 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube2\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube2\minikube-integration\.minikube}
	I0917 16:58:56.277796    4408 ubuntu.go:177] setting up certificates
	I0917 16:58:56.277796    4408 provision.go:84] configureAuth start
	I0917 16:58:56.293764    4408 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-000400
	I0917 16:58:56.469474    4408 provision.go:143] copyHostCerts
	I0917 16:58:56.470455    4408 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem (1679 bytes)
	I0917 16:58:56.472425    4408 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0917 16:58:56.474445    4408 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0917 16:58:56.475446    4408 provision.go:117] generating server cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-000400 san=[127.0.0.1 192.168.49.2 addons-000400 localhost minikube]
	I0917 16:58:56.839625    4408 provision.go:177] copyRemoteCerts
	I0917 16:58:56.851208    4408 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 16:58:56.858215    4408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-000400
	I0917 16:58:56.938938    4408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53750 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-000400\id_rsa Username:docker}
	I0917 16:58:57.085539    4408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 16:58:57.146730    4408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 16:58:57.206908    4408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 16:58:57.264923    4408 provision.go:87] duration metric: took 987.1196ms to configureAuth
	I0917 16:58:57.264923    4408 ubuntu.go:193] setting minikube options for container-runtime
	I0917 16:58:57.265919    4408 config.go:182] Loaded profile config "addons-000400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 16:58:57.281539    4408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-000400
	I0917 16:58:57.456316    4408 main.go:141] libmachine: Using SSH client type: native
	I0917 16:58:57.457342    4408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf99a00] 0xf9c540 <nil>  [] 0s} 127.0.0.1 53750 <nil> <nil>}
	I0917 16:58:57.457342    4408 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 16:58:57.683908    4408 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0917 16:58:57.684823    4408 ubuntu.go:71] root file system type: overlay
	I0917 16:58:57.684823    4408 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 16:58:57.696822    4408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-000400
	I0917 16:58:57.890996    4408 main.go:141] libmachine: Using SSH client type: native
	I0917 16:58:57.890996    4408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf99a00] 0xf9c540 <nil>  [] 0s} 127.0.0.1 53750 <nil> <nil>}
	I0917 16:58:57.890996    4408 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 16:58:58.133545    4408 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0917 16:58:58.144527    4408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-000400
	I0917 16:58:58.315052    4408 main.go:141] libmachine: Using SSH client type: native
	I0917 16:58:58.315052    4408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf99a00] 0xf9c540 <nil>  [] 0s} 127.0.0.1 53750 <nil> <nil>}
	I0917 16:58:58.315052    4408 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 16:59:00.260884    4408 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-06 12:06:41.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-17 16:58:58.116566955 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0917 16:59:00.260884    4408 machine.go:96] duration metric: took 5.2217414s to provisionDockerMachine
	I0917 16:59:00.261878    4408 client.go:171] duration metric: took 1m26.7426466s to LocalClient.Create
	I0917 16:59:00.261878    4408 start.go:167] duration metric: took 1m26.7428388s to libmachine.API.Create "addons-000400"
	I0917 16:59:00.261878    4408 start.go:293] postStartSetup for "addons-000400" (driver="docker")
	I0917 16:59:00.261878    4408 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 16:59:00.281908    4408 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 16:59:00.295055    4408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-000400
	I0917 16:59:00.417885    4408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53750 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-000400\id_rsa Username:docker}
	I0917 16:59:00.592911    4408 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 16:59:00.605889    4408 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 16:59:00.605889    4408 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 16:59:00.605889    4408 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 16:59:00.605889    4408 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 16:59:00.605889    4408 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\addons for local assets ...
	I0917 16:59:00.606893    4408 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\files for local assets ...
	I0917 16:59:00.606893    4408 start.go:296] duration metric: took 345.0122ms for postStartSetup
	I0917 16:59:00.623897    4408 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-000400
	I0917 16:59:00.795902    4408 profile.go:143] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-000400\config.json ...
	I0917 16:59:00.811894    4408 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 16:59:00.818886    4408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-000400
	I0917 16:59:00.888387    4408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53750 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-000400\id_rsa Username:docker}
	I0917 16:59:01.045820    4408 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 16:59:01.063203    4408 start.go:128] duration metric: took 1m27.5478163s to createHost
	I0917 16:59:01.063286    4408 start.go:83] releasing machines lock for "addons-000400", held for 1m27.5485133s
	I0917 16:59:01.072363    4408 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-000400
	I0917 16:59:01.146946    4408 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0917 16:59:01.158906    4408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-000400
	I0917 16:59:01.159642    4408 ssh_runner.go:195] Run: cat /version.json
	I0917 16:59:01.167313    4408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-000400
	I0917 16:59:01.231563    4408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53750 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-000400\id_rsa Username:docker}
	I0917 16:59:01.238574    4408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53750 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-000400\id_rsa Username:docker}
	W0917 16:59:01.360669    4408 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0917 16:59:01.384074    4408 ssh_runner.go:195] Run: systemctl --version
	I0917 16:59:01.418416    4408 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 16:59:01.445335    4408 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W0917 16:59:01.464351    4408 start.go:439] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I0917 16:59:01.476336    4408 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W0917 16:59:01.495519    4408 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W0917 16:59:01.495519    4408 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0917 16:59:01.548235    4408 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0917 16:59:01.548331    4408 start.go:495] detecting cgroup driver to use...
	I0917 16:59:01.548331    4408 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0917 16:59:01.548331    4408 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 16:59:01.594286    4408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 16:59:01.634759    4408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 16:59:01.660305    4408 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 16:59:01.674665    4408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 16:59:01.712500    4408 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 16:59:01.751169    4408 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 16:59:01.798638    4408 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 16:59:01.859554    4408 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 16:59:01.903574    4408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 16:59:01.958531    4408 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 16:59:02.001632    4408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 16:59:02.058208    4408 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 16:59:02.091172    4408 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 16:59:02.124190    4408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 16:59:02.369475    4408 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 16:59:02.632838    4408 start.go:495] detecting cgroup driver to use...
	I0917 16:59:02.632838    4408 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0917 16:59:02.654749    4408 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 16:59:02.714714    4408 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0917 16:59:02.736702    4408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 16:59:02.770706    4408 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 16:59:02.860974    4408 ssh_runner.go:195] Run: which cri-dockerd
	I0917 16:59:02.899964    4408 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 16:59:02.931035    4408 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 16:59:03.001896    4408 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 16:59:03.312934    4408 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 16:59:03.611462    4408 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 16:59:03.611462    4408 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 16:59:03.682496    4408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 16:59:03.934449    4408 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 16:59:04.947025    4408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 16:59:04.993702    4408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 16:59:05.031211    4408 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 16:59:05.214528    4408 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 16:59:05.377210    4408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 16:59:05.542599    4408 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 16:59:05.585166    4408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 16:59:05.626754    4408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 16:59:05.803389    4408 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 16:59:05.961132    4408 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 16:59:05.973725    4408 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 16:59:05.986083    4408 start.go:563] Will wait 60s for crictl version
	I0917 16:59:05.998081    4408 ssh_runner.go:195] Run: which crictl
	I0917 16:59:06.018155    4408 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 16:59:06.093976    4408 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 16:59:06.103824    4408 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 16:59:06.177066    4408 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 16:59:06.235428    4408 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 16:59:06.243880    4408 cli_runner.go:164] Run: docker exec -t addons-000400 dig +short host.docker.internal
	I0917 16:59:06.440729    4408 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0917 16:59:06.454161    4408 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0917 16:59:06.464607    4408 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 16:59:06.495352    4408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-000400
	I0917 16:59:06.572269    4408 kubeadm.go:883] updating cluster {Name:addons-000400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-000400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 16:59:06.572560    4408 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 16:59:06.581704    4408 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 16:59:06.631524    4408 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0917 16:59:06.631682    4408 docker.go:615] Images already preloaded, skipping extraction
	I0917 16:59:06.645978    4408 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 16:59:06.701387    4408 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0917 16:59:06.701547    4408 cache_images.go:84] Images are preloaded, skipping loading
	I0917 16:59:06.701547    4408 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0917 16:59:06.701907    4408 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-000400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-000400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 16:59:06.711806    4408 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 16:59:06.829179    4408 cni.go:84] Creating CNI manager for ""
	I0917 16:59:06.829179    4408 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 16:59:06.829179    4408 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 16:59:06.829179    4408 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-000400 NodeName:addons-000400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 16:59:06.829179    4408 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-000400"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 16:59:06.842866    4408 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 16:59:06.869425    4408 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 16:59:06.882831    4408 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 16:59:06.906801    4408 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0917 16:59:06.946404    4408 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 16:59:06.986301    4408 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0917 16:59:07.038622    4408 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0917 16:59:07.053120    4408 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 16:59:07.097042    4408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 16:59:07.267370    4408 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 16:59:07.301803    4408 certs.go:68] Setting up C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-000400 for IP: 192.168.49.2
	I0917 16:59:07.301975    4408 certs.go:194] generating shared ca certs ...
	I0917 16:59:07.302051    4408 certs.go:226] acquiring lock for ca certs: {Name:mka39b35711ce17aa627001b408a7adb2f266bbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:59:07.302696    4408 certs.go:240] generating "minikubeCA" ca cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key
	I0917 16:59:07.515083    4408 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt ...
	I0917 16:59:07.515083    4408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt: {Name:mkc5b851ca682f7aff857055d591694d36175fe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:59:07.517094    4408 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key ...
	I0917 16:59:07.517094    4408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key: {Name:mk9089fc50aceda2aa3f2747811085b675041b36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:59:07.518083    4408 certs.go:240] generating "proxyClientCA" ca cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key
	I0917 16:59:07.727137    4408 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0917 16:59:07.727137    4408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mkd5c7d70e5d33d063f91e60ee9bd4852fbc5909 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:59:07.728503    4408 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key ...
	I0917 16:59:07.728503    4408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key: {Name:mkbb7b28a2f5e99a3e449ce85c8a848dee712fb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:59:07.729746    4408 certs.go:256] generating profile certs ...
	I0917 16:59:07.730155    4408 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-000400\client.key
	I0917 16:59:07.730155    4408 crypto.go:68] Generating cert C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-000400\client.crt with IP's: []
	I0917 16:59:07.865402    4408 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-000400\client.crt ...
	I0917 16:59:07.865402    4408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-000400\client.crt: {Name:mkd3157062cd41f747df8b6384547ab766c3046a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:59:07.866106    4408 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-000400\client.key ...
	I0917 16:59:07.866106    4408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-000400\client.key: {Name:mk5a9ff1c0e320daa987a65257edcc1fd392c704 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:59:07.867204    4408 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-000400\apiserver.key.1de5f6c5
	I0917 16:59:07.868313    4408 crypto.go:68] Generating cert C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-000400\apiserver.crt.1de5f6c5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0917 16:59:08.176135    4408 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-000400\apiserver.crt.1de5f6c5 ...
	I0917 16:59:08.176135    4408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-000400\apiserver.crt.1de5f6c5: {Name:mk130e3688021f96ef3ee1c5d0a8ec6ab66bced5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:59:08.176790    4408 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-000400\apiserver.key.1de5f6c5 ...
	I0917 16:59:08.177793    4408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-000400\apiserver.key.1de5f6c5: {Name:mkcde0e890bc8f1dcadf087167d60f7a36bfefd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:59:08.177940    4408 certs.go:381] copying C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-000400\apiserver.crt.1de5f6c5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-000400\apiserver.crt
	I0917 16:59:08.190140    4408 certs.go:385] copying C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-000400\apiserver.key.1de5f6c5 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-000400\apiserver.key
	I0917 16:59:08.190516    4408 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-000400\proxy-client.key
	I0917 16:59:08.191547    4408 crypto.go:68] Generating cert C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-000400\proxy-client.crt with IP's: []
	I0917 16:59:08.353859    4408 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-000400\proxy-client.crt ...
	I0917 16:59:08.354867    4408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-000400\proxy-client.crt: {Name:mk8e7dfea037ae9ea5c284eab36d2a30fa18ea27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:59:08.355880    4408 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-000400\proxy-client.key ...
	I0917 16:59:08.355880    4408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-000400\proxy-client.key: {Name:mke378ec39af35c792f0f6ccd5fe9e53040fd1ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:59:08.367434    4408 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0917 16:59:08.368440    4408 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0917 16:59:08.368440    4408 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0917 16:59:08.368440    4408 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0917 16:59:08.370445    4408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 16:59:08.425726    4408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 16:59:08.478448    4408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 16:59:08.538029    4408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 16:59:08.591593    4408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-000400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0917 16:59:08.646493    4408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-000400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 16:59:08.691467    4408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-000400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 16:59:08.733486    4408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-000400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 16:59:08.796161    4408 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 16:59:08.848563    4408 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 16:59:08.897832    4408 ssh_runner.go:195] Run: openssl version
	I0917 16:59:08.925875    4408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 16:59:08.959672    4408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 16:59:08.970876    4408 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:59 /usr/share/ca-certificates/minikubeCA.pem
	I0917 16:59:08.982864    4408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 16:59:09.007837    4408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 16:59:09.038768    4408 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 16:59:09.053814    4408 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 16:59:09.053814    4408 kubeadm.go:392] StartCluster: {Name:addons-000400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-000400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 16:59:09.060868    4408 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 16:59:09.119847    4408 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 16:59:09.162658    4408 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 16:59:09.185899    4408 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0917 16:59:09.199896    4408 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 16:59:09.222451    4408 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 16:59:09.222451    4408 kubeadm.go:157] found existing configuration files:
	
	I0917 16:59:09.234100    4408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 16:59:09.256444    4408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 16:59:09.270365    4408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 16:59:09.304948    4408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 16:59:09.330950    4408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 16:59:09.343757    4408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 16:59:09.378242    4408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 16:59:09.396007    4408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 16:59:09.407268    4408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 16:59:09.441061    4408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 16:59:09.462392    4408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 16:59:09.474277    4408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 16:59:09.496523    4408 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0917 16:59:09.566066    4408 kubeadm.go:310] W0917 16:59:09.563297    1969 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 16:59:09.566792    4408 kubeadm.go:310] W0917 16:59:09.564253    1969 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 16:59:09.602894    4408 kubeadm.go:310] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I0917 16:59:09.731796    4408 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 16:59:25.527261    4408 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 16:59:25.528502    4408 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 16:59:25.528826    4408 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 16:59:25.528826    4408 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 16:59:25.528826    4408 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 16:59:25.529568    4408 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 16:59:25.532300    4408 out.go:235]   - Generating certificates and keys ...
	I0917 16:59:25.532300    4408 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 16:59:25.532300    4408 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 16:59:25.532860    4408 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 16:59:25.533015    4408 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 16:59:25.533168    4408 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 16:59:25.533321    4408 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 16:59:25.533321    4408 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 16:59:25.534075    4408 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-000400 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0917 16:59:25.534226    4408 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 16:59:25.534586    4408 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-000400 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0917 16:59:25.534898    4408 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 16:59:25.535218    4408 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 16:59:25.535449    4408 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 16:59:25.535681    4408 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 16:59:25.535681    4408 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 16:59:25.535681    4408 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 16:59:25.535681    4408 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 16:59:25.536295    4408 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 16:59:25.536415    4408 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 16:59:25.536415    4408 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 16:59:25.536415    4408 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 16:59:25.539080    4408 out.go:235]   - Booting up control plane ...
	I0917 16:59:25.539080    4408 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 16:59:25.540105    4408 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 16:59:25.540105    4408 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 16:59:25.540105    4408 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 16:59:25.540997    4408 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 16:59:25.540997    4408 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 16:59:25.540997    4408 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 16:59:25.541729    4408 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 16:59:25.541786    4408 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002444819s
	I0917 16:59:25.542101    4408 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 16:59:25.542101    4408 kubeadm.go:310] [api-check] The API server is healthy after 8.003165119s
	I0917 16:59:25.542101    4408 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 16:59:25.542667    4408 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 16:59:25.542667    4408 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 16:59:25.543512    4408 kubeadm.go:310] [mark-control-plane] Marking the node addons-000400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 16:59:25.544119    4408 kubeadm.go:310] [bootstrap-token] Using token: 4nlmb0.z4v6crossr100ojz
	I0917 16:59:25.546446    4408 out.go:235]   - Configuring RBAC rules ...
	I0917 16:59:25.547193    4408 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 16:59:25.547618    4408 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 16:59:25.548117    4408 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 16:59:25.548567    4408 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 16:59:25.549015    4408 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 16:59:25.549214    4408 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 16:59:25.549578    4408 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 16:59:25.549688    4408 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 16:59:25.549897    4408 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 16:59:25.549897    4408 kubeadm.go:310] 
	I0917 16:59:25.549897    4408 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 16:59:25.549897    4408 kubeadm.go:310] 
	I0917 16:59:25.549897    4408 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 16:59:25.549897    4408 kubeadm.go:310] 
	I0917 16:59:25.549897    4408 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 16:59:25.550567    4408 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 16:59:25.550888    4408 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 16:59:25.550888    4408 kubeadm.go:310] 
	I0917 16:59:25.550888    4408 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 16:59:25.550888    4408 kubeadm.go:310] 
	I0917 16:59:25.551513    4408 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 16:59:25.551639    4408 kubeadm.go:310] 
	I0917 16:59:25.551913    4408 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 16:59:25.551913    4408 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 16:59:25.551913    4408 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 16:59:25.551913    4408 kubeadm.go:310] 
	I0917 16:59:25.552579    4408 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 16:59:25.553076    4408 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 16:59:25.553132    4408 kubeadm.go:310] 
	I0917 16:59:25.553324    4408 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 4nlmb0.z4v6crossr100ojz \
	I0917 16:59:25.553638    4408 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:108abcd620a0a206e52f0c2a0517b47dde71b0878871284c4173eae1f5b7a19d \
	I0917 16:59:25.553889    4408 kubeadm.go:310] 	--control-plane 
	I0917 16:59:25.553936    4408 kubeadm.go:310] 
	I0917 16:59:25.554254    4408 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 16:59:25.554335    4408 kubeadm.go:310] 
	I0917 16:59:25.554723    4408 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 4nlmb0.z4v6crossr100ojz \
	I0917 16:59:25.555328    4408 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:108abcd620a0a206e52f0c2a0517b47dde71b0878871284c4173eae1f5b7a19d 
	I0917 16:59:25.555402    4408 cni.go:84] Creating CNI manager for ""
	I0917 16:59:25.555402    4408 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 16:59:25.561518    4408 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0917 16:59:25.579083    4408 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0917 16:59:25.630682    4408 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0917 16:59:25.809720    4408 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 16:59:25.830346    4408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:59:25.831453    4408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-000400 minikube.k8s.io/updated_at=2024_09_17T16_59_25_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=addons-000400 minikube.k8s.io/primary=true
	I0917 16:59:25.833270    4408 ops.go:34] apiserver oom_adj: -16
	I0917 16:59:26.131440    4408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:59:26.633287    4408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:59:27.131536    4408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:59:27.630125    4408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:59:28.129802    4408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:59:28.631956    4408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:59:29.132455    4408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:59:29.629524    4408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:59:30.136861    4408 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 16:59:30.513966    4408 kubeadm.go:1113] duration metric: took 4.7040001s to wait for elevateKubeSystemPrivileges
	I0917 16:59:30.514041    4408 kubeadm.go:394] duration metric: took 21.4600482s to StartCluster
	I0917 16:59:30.514041    4408 settings.go:142] acquiring lock: {Name:mk9684611c6005d251a6ecf406b4611c2c1e30f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:59:30.514174    4408 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0917 16:59:30.516478    4408 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\kubeconfig: {Name:mk7e72b8b9c82f9d87d6aed6af6962a1c1fa489d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:59:30.518745    4408 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0917 16:59:30.518937    4408 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 16:59:30.519094    4408 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0917 16:59:30.519468    4408 addons.go:69] Setting yakd=true in profile "addons-000400"
	I0917 16:59:30.519528    4408 addons.go:69] Setting cloud-spanner=true in profile "addons-000400"
	I0917 16:59:30.519685    4408 addons.go:234] Setting addon cloud-spanner=true in "addons-000400"
	I0917 16:59:30.519685    4408 addons.go:234] Setting addon yakd=true in "addons-000400"
	I0917 16:59:30.519810    4408 host.go:66] Checking if "addons-000400" exists ...
	I0917 16:59:30.519810    4408 config.go:182] Loaded profile config "addons-000400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 16:59:30.519810    4408 host.go:66] Checking if "addons-000400" exists ...
	I0917 16:59:30.519810    4408 addons.go:69] Setting inspektor-gadget=true in profile "addons-000400"
	I0917 16:59:30.520558    4408 addons.go:69] Setting volumesnapshots=true in profile "addons-000400"
	I0917 16:59:30.520347    4408 addons.go:69] Setting storage-provisioner=true in profile "addons-000400"
	I0917 16:59:30.520766    4408 addons.go:69] Setting ingress-dns=true in profile "addons-000400"
	I0917 16:59:30.520826    4408 addons.go:234] Setting addon storage-provisioner=true in "addons-000400"
	I0917 16:59:30.520826    4408 addons.go:234] Setting addon ingress-dns=true in "addons-000400"
	I0917 16:59:30.520870    4408 host.go:66] Checking if "addons-000400" exists ...
	I0917 16:59:30.521032    4408 host.go:66] Checking if "addons-000400" exists ...
	I0917 16:59:30.520558    4408 addons.go:234] Setting addon inspektor-gadget=true in "addons-000400"
	I0917 16:59:30.521032    4408 host.go:66] Checking if "addons-000400" exists ...
	I0917 16:59:30.520558    4408 addons.go:69] Setting metrics-server=true in profile "addons-000400"
	I0917 16:59:30.521687    4408 addons.go:234] Setting addon metrics-server=true in "addons-000400"
	I0917 16:59:30.521687    4408 host.go:66] Checking if "addons-000400" exists ...
	I0917 16:59:30.520558    4408 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-000400"
	I0917 16:59:30.522492    4408 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-000400"
	I0917 16:59:30.522492    4408 host.go:66] Checking if "addons-000400" exists ...
	I0917 16:59:30.520558    4408 addons.go:69] Setting registry=true in profile "addons-000400"
	I0917 16:59:30.522492    4408 addons.go:234] Setting addon registry=true in "addons-000400"
	I0917 16:59:30.522492    4408 out.go:177] * Verifying Kubernetes components...
	I0917 16:59:30.522492    4408 host.go:66] Checking if "addons-000400" exists ...
	I0917 16:59:30.520558    4408 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-000400"
	I0917 16:59:30.523497    4408 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-000400"
	I0917 16:59:30.520558    4408 addons.go:69] Setting gcp-auth=true in profile "addons-000400"
	I0917 16:59:30.523497    4408 mustload.go:65] Loading cluster: addons-000400
	I0917 16:59:30.523497    4408 config.go:182] Loaded profile config "addons-000400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 16:59:30.520558    4408 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-000400"
	I0917 16:59:30.523497    4408 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-000400"
	I0917 16:59:30.524514    4408 host.go:66] Checking if "addons-000400" exists ...
	I0917 16:59:30.520558    4408 addons.go:69] Setting default-storageclass=true in profile "addons-000400"
	I0917 16:59:30.525499    4408 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-000400"
	I0917 16:59:30.520558    4408 addons.go:69] Setting ingress=true in profile "addons-000400"
	I0917 16:59:30.525499    4408 addons.go:234] Setting addon ingress=true in "addons-000400"
	I0917 16:59:30.526498    4408 host.go:66] Checking if "addons-000400" exists ...
	I0917 16:59:30.520558    4408 addons.go:69] Setting helm-tiller=true in profile "addons-000400"
	I0917 16:59:30.526498    4408 addons.go:234] Setting addon helm-tiller=true in "addons-000400"
	I0917 16:59:30.527502    4408 host.go:66] Checking if "addons-000400" exists ...
	I0917 16:59:30.520766    4408 addons.go:234] Setting addon volumesnapshots=true in "addons-000400"
	I0917 16:59:30.528499    4408 host.go:66] Checking if "addons-000400" exists ...
	I0917 16:59:30.520558    4408 addons.go:69] Setting volcano=true in profile "addons-000400"
	I0917 16:59:30.528499    4408 addons.go:234] Setting addon volcano=true in "addons-000400"
	I0917 16:59:30.528499    4408 host.go:66] Checking if "addons-000400" exists ...
	I0917 16:59:30.560390    4408 cli_runner.go:164] Run: docker container inspect addons-000400 --format={{.State.Status}}
	I0917 16:59:30.560390    4408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 16:59:30.564155    4408 cli_runner.go:164] Run: docker container inspect addons-000400 --format={{.State.Status}}
	I0917 16:59:30.567692    4408 cli_runner.go:164] Run: docker container inspect addons-000400 --format={{.State.Status}}
	I0917 16:59:30.567692    4408 cli_runner.go:164] Run: docker container inspect addons-000400 --format={{.State.Status}}
	I0917 16:59:30.568696    4408 cli_runner.go:164] Run: docker container inspect addons-000400 --format={{.State.Status}}
	I0917 16:59:30.568696    4408 cli_runner.go:164] Run: docker container inspect addons-000400 --format={{.State.Status}}
	I0917 16:59:30.574627    4408 cli_runner.go:164] Run: docker container inspect addons-000400 --format={{.State.Status}}
	I0917 16:59:30.576587    4408 cli_runner.go:164] Run: docker container inspect addons-000400 --format={{.State.Status}}
	I0917 16:59:30.577548    4408 cli_runner.go:164] Run: docker container inspect addons-000400 --format={{.State.Status}}
	I0917 16:59:30.582331    4408 cli_runner.go:164] Run: docker container inspect addons-000400 --format={{.State.Status}}
	I0917 16:59:30.582331    4408 cli_runner.go:164] Run: docker container inspect addons-000400 --format={{.State.Status}}
	I0917 16:59:30.586614    4408 cli_runner.go:164] Run: docker container inspect addons-000400 --format={{.State.Status}}
	I0917 16:59:30.591464    4408 cli_runner.go:164] Run: docker container inspect addons-000400 --format={{.State.Status}}
	I0917 16:59:30.592607    4408 cli_runner.go:164] Run: docker container inspect addons-000400 --format={{.State.Status}}
	I0917 16:59:30.613617    4408 cli_runner.go:164] Run: docker container inspect addons-000400 --format={{.State.Status}}
	I0917 16:59:30.615800    4408 cli_runner.go:164] Run: docker container inspect addons-000400 --format={{.State.Status}}
	I0917 16:59:30.719501    4408 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0917 16:59:30.724503    4408 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0917 16:59:30.724503    4408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0917 16:59:30.730485    4408 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 16:59:30.737482    4408 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 16:59:30.737482    4408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 16:59:30.740507    4408 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0917 16:59:30.743512    4408 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 16:59:30.743512    4408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0917 16:59:30.746510    4408 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0917 16:59:30.746510    4408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-000400
	I0917 16:59:30.746510    4408 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0917 16:59:30.746510    4408 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0917 16:59:30.749485    4408 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0917 16:59:30.752521    4408 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0917 16:59:30.753501    4408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-000400
	I0917 16:59:30.754489    4408 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0917 16:59:30.757510    4408 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0917 16:59:30.759491    4408 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0917 16:59:30.760495    4408 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0917 16:59:30.763489    4408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-000400
	I0917 16:59:30.766518    4408 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0917 16:59:30.770552    4408 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 16:59:30.771486    4408 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0917 16:59:30.772490    4408 host.go:66] Checking if "addons-000400" exists ...
	I0917 16:59:30.772490    4408 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0917 16:59:30.773492    4408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-000400
	I0917 16:59:30.773492    4408 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 16:59:30.784675    4408 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0917 16:59:30.780183    4408 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0917 16:59:30.781699    4408 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0917 16:59:30.787535    4408 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0917 16:59:30.787535    4408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0917 16:59:30.788081    4408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0917 16:59:30.791030    4408 addons.go:234] Setting addon default-storageclass=true in "addons-000400"
	I0917 16:59:30.792067    4408 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 16:59:30.792673    4408 host.go:66] Checking if "addons-000400" exists ...
	I0917 16:59:30.795135    4408 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0917 16:59:30.795135    4408 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0917 16:59:30.798207    4408 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0917 16:59:30.800133    4408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}'" addons-000400
	I0917 16:59:30.800133    4408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-000400
	I0917 16:59:30.801142    4408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-000400
	I0917 16:59:30.803144    4408 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0917 16:59:30.803144    4408 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0917 16:59:30.815158    4408 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0917 16:59:30.810148    4408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-000400
	I0917 16:59:30.811155    4408 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0917 16:59:30.819139    4408 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0917 16:59:30.820145    4408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-000400
	I0917 16:59:30.821145    4408 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 16:59:30.825173    4408 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0917 16:59:30.825173    4408 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0917 16:59:30.829144    4408 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-000400"
	I0917 16:59:30.832141    4408 host.go:66] Checking if "addons-000400" exists ...
	I0917 16:59:30.830143    4408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-000400
	I0917 16:59:30.833148    4408 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0917 16:59:30.833148    4408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0917 16:59:30.838139    4408 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0917 16:59:30.838139    4408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0917 16:59:30.841139    4408 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0917 16:59:30.842148    4408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-000400
	I0917 16:59:30.845144    4408 cli_runner.go:164] Run: docker container inspect addons-000400 --format={{.State.Status}}
	I0917 16:59:30.848493    4408 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0917 16:59:30.852411    4408 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0917 16:59:30.852411    4408 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0917 16:59:30.857210    4408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-000400
	I0917 16:59:30.860750    4408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-000400
	I0917 16:59:30.883458    4408 cli_runner.go:164] Run: docker container inspect addons-000400 --format={{.State.Status}}
	I0917 16:59:30.887171    4408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-000400
	I0917 16:59:30.898807    4408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53750 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-000400\id_rsa Username:docker}
	I0917 16:59:30.920222    4408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53750 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-000400\id_rsa Username:docker}
	I0917 16:59:30.953063    4408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53750 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-000400\id_rsa Username:docker}
	I0917 16:59:30.954102    4408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53750 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-000400\id_rsa Username:docker}
	I0917 16:59:30.978090    4408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53750 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-000400\id_rsa Username:docker}
	I0917 16:59:30.987065    4408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53750 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-000400\id_rsa Username:docker}
	I0917 16:59:30.995067    4408 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 16:59:30.995067    4408 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 16:59:30.995067    4408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53750 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-000400\id_rsa Username:docker}
	I0917 16:59:31.003070    4408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53750 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-000400\id_rsa Username:docker}
	I0917 16:59:31.011093    4408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53750 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-000400\id_rsa Username:docker}
	I0917 16:59:31.012084    4408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53750 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-000400\id_rsa Username:docker}
	I0917 16:59:31.016060    4408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-000400
	I0917 16:59:31.020099    4408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53750 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-000400\id_rsa Username:docker}
	I0917 16:59:31.024083    4408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53750 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-000400\id_rsa Username:docker}
	I0917 16:59:31.051673    4408 out.go:201] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                      │
	│    Registry addon with docker driver uses port 53753 please use that instead of default port 5000    │
	│                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0917 16:59:31.053333    4408 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0917 16:59:31.061510    4408 out.go:177] * For more information see: https://minikube.sigs.k8s.io/docs/drivers/docker
	I0917 16:59:31.061510    4408 out.go:177]   - Using image docker.io/busybox:stable
	I0917 16:59:31.067507    4408 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 16:59:31.067507    4408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0917 16:59:31.067507    4408 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0917 16:59:31.069590    4408 out.go:177]   - Using image docker.io/registry:2.8.3
	I0917 16:59:31.071506    4408 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0917 16:59:31.071506    4408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0917 16:59:31.075507    4408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-000400
	I0917 16:59:31.082513    4408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-000400
	I0917 16:59:31.101516    4408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53750 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-000400\id_rsa Username:docker}
	W0917 16:59:31.115697    4408 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0917 16:59:31.116010    4408 retry.go:31] will retry after 240.244625ms: ssh: handshake failed: EOF
	W0917 16:59:31.115697    4408 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0917 16:59:31.116130    4408 retry.go:31] will retry after 136.827676ms: ssh: handshake failed: EOF
	I0917 16:59:31.166669    4408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53750 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-000400\id_rsa Username:docker}
	I0917 16:59:31.166669    4408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53750 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-000400\id_rsa Username:docker}
	I0917 16:59:31.711382    4408 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.1926277s)
	I0917 16:59:31.712281    4408 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0917 16:59:31.819517    4408 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.2580044s)
	I0917 16:59:31.836252    4408 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 16:59:32.016238    4408 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0917 16:59:32.016238    4408 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0917 16:59:32.037056    4408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 16:59:32.116161    4408 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0917 16:59:32.116161    4408 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0917 16:59:32.116950    4408 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0917 16:59:32.117027    4408 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0917 16:59:32.215738    4408 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0917 16:59:32.215738    4408 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0917 16:59:32.235123    4408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 16:59:32.238241    4408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 16:59:32.238241    4408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0917 16:59:32.239851    4408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0917 16:59:32.342151    4408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0917 16:59:32.417997    4408 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0917 16:59:32.417997    4408 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0917 16:59:32.434696    4408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0917 16:59:32.434696    4408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 16:59:32.515717    4408 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0917 16:59:32.515717    4408 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0917 16:59:32.515717    4408 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 16:59:32.515717    4408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0917 16:59:32.610795    4408 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0917 16:59:32.610795    4408 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0917 16:59:32.718823    4408 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0917 16:59:32.718823    4408 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0917 16:59:32.718823    4408 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0917 16:59:32.718823    4408 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0917 16:59:32.718823    4408 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0917 16:59:32.718823    4408 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0917 16:59:33.015788    4408 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 16:59:33.016050    4408 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 16:59:33.017886    4408 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0917 16:59:33.017886    4408 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0917 16:59:33.215360    4408 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0917 16:59:33.215360    4408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0917 16:59:33.311422    4408 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0917 16:59:33.311422    4408 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0917 16:59:33.316848    4408 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0917 16:59:33.317027    4408 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0917 16:59:33.416198    4408 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0917 16:59:33.416198    4408 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0917 16:59:33.536563    4408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0917 16:59:33.715347    4408 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0917 16:59:33.715347    4408 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0917 16:59:33.833171    4408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0917 16:59:33.913767    4408 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 16:59:33.913767    4408 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 16:59:33.913767    4408 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0917 16:59:33.913767    4408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0917 16:59:34.015527    4408 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0917 16:59:34.015527    4408 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0917 16:59:34.015527    4408 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0917 16:59:34.015527    4408 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0917 16:59:34.316337    4408 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0917 16:59:34.316337    4408 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0917 16:59:34.331288    4408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0917 16:59:34.531041    4408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 16:59:34.810075    4408 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0917 16:59:34.810075    4408 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0917 16:59:34.810831    4408 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 16:59:34.810831    4408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0917 16:59:34.916743    4408 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0917 16:59:34.916796    4408 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0917 16:59:35.418257    4408 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0917 16:59:35.418379    4408 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0917 16:59:35.516086    4408 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0917 16:59:35.516086    4408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0917 16:59:35.632265    4408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 16:59:36.215230    4408 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 16:59:36.215230    4408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0917 16:59:36.418022    4408 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.7056692s)
	I0917 16:59:36.418167    4408 start.go:971] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I0917 16:59:36.418290    4408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.3811979s)
	I0917 16:59:36.418167    4408 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.5818765s)
	I0917 16:59:36.431152    4408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-000400
	I0917 16:59:36.506481    4408 node_ready.go:35] waiting up to 6m0s for node "addons-000400" to be "Ready" ...
	I0917 16:59:36.711937    4408 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0917 16:59:36.711937    4408 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0917 16:59:36.818542    4408 node_ready.go:49] node "addons-000400" has status "Ready":"True"
	I0917 16:59:36.818750    4408 node_ready.go:38] duration metric: took 312.0581ms for node "addons-000400" to be "Ready" ...
	I0917 16:59:36.818793    4408 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 16:59:37.216174    4408 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0917 16:59:37.216174    4408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0917 16:59:37.234008    4408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 16:59:37.328175    4408 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-27bdr" in "kube-system" namespace to be "Ready" ...
	I0917 16:59:37.815912    4408 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-000400" context rescaled to 1 replicas
	I0917 16:59:37.916153    4408 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0917 16:59:37.916240    4408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0917 16:59:38.618058    4408 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 16:59:38.618185    4408 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0917 16:59:39.031832    4408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 16:59:39.616060    4408 pod_ready.go:103] pod "coredns-7c65d6cfc9-27bdr" in "kube-system" namespace has status "Ready":"False"
	I0917 16:59:42.016623    4408 pod_ready.go:103] pod "coredns-7c65d6cfc9-27bdr" in "kube-system" namespace has status "Ready":"False"
	I0917 16:59:44.413892    4408 pod_ready.go:103] pod "coredns-7c65d6cfc9-27bdr" in "kube-system" namespace has status "Ready":"False"
	I0917 16:59:44.813393    4408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (12.5781653s)
	I0917 16:59:45.830221    4408 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0917 16:59:45.839869    4408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-000400
	I0917 16:59:45.929186    4408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53750 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-000400\id_rsa Username:docker}
	I0917 16:59:47.114983    4408 pod_ready.go:103] pod "coredns-7c65d6cfc9-27bdr" in "kube-system" namespace has status "Ready":"False"
	I0917 16:59:47.322429    4408 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0917 16:59:47.708605    4408 addons.go:234] Setting addon gcp-auth=true in "addons-000400"
	I0917 16:59:47.709012    4408 host.go:66] Checking if "addons-000400" exists ...
	I0917 16:59:47.737687    4408 cli_runner.go:164] Run: docker container inspect addons-000400 --format={{.State.Status}}
	I0917 16:59:47.835044    4408 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0917 16:59:47.842042    4408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-000400
	I0917 16:59:47.922415    4408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53750 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-000400\id_rsa Username:docker}
	I0917 16:59:49.212853    4408 pod_ready.go:103] pod "coredns-7c65d6cfc9-27bdr" in "kube-system" namespace has status "Ready":"False"
	I0917 16:59:51.627141    4408 pod_ready.go:103] pod "coredns-7c65d6cfc9-27bdr" in "kube-system" namespace has status "Ready":"False"
	I0917 16:59:54.210374    4408 pod_ready.go:103] pod "coredns-7c65d6cfc9-27bdr" in "kube-system" namespace has status "Ready":"False"
	I0917 16:59:56.809266    4408 pod_ready.go:103] pod "coredns-7c65d6cfc9-27bdr" in "kube-system" namespace has status "Ready":"False"
	I0917 16:59:58.613318    4408 pod_ready.go:98] pod "coredns-7c65d6cfc9-27bdr" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 16:59:58 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 16:59:30 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 16:59:30 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 16:59:30 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 16:59:30 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-17 16:59:30 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-17 16:59:42 +0000 UTC,FinishedAt:2024-09-17 16:59:54 +0000 UTC,ContainerID:docker://9fd7e659e2577095a3ec7c09162eae7f56ab850289f58ae9023b46b967d6fc23,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://9fd7e659e2577095a3ec7c09162eae7f56ab850289f58ae9023b46b967d6fc23 Started:0xc0014b5810 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002331c00} {Name:kube-api-access-62cbn MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002331c10}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0917 16:59:58.613445    4408 pod_ready.go:82] duration metric: took 21.2839223s for pod "coredns-7c65d6cfc9-27bdr" in "kube-system" namespace to be "Ready" ...
	E0917 16:59:58.613655    4408 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-27bdr" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 16:59:58 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 16:59:30 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 16:59:30 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 16:59:30 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 16:59:30 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-17 16:59:30 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-17 16:59:42 +0000 UTC,FinishedAt:2024-09-17 16:59:54 +0000 UTC,ContainerID:docker://9fd7e659e2577095a3ec7c09162eae7f56ab850289f58ae9023b46b967d6fc23,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://9fd7e659e2577095a3ec7c09162eae7f56ab850289f58ae9023b46b967d6fc23 Started:0xc0014b5810 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002331c00} {Name:kube-api-access-62cbn MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002331c10}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
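Note: the "Succeeded (skipping!)" entries above are most likely a consequence of the coredns deployment being rescaled to 1 replica at 16:59:37; coredns-7c65d6cfc9-27bdr completed with exit code 0 and the readiness wait moves on to the surviving replica, coredns-7c65d6cfc9-vxl6h. A minimal way to confirm which replica remains, using the standard k8s-app=kube-dns label already listed in the wait set above (illustrative command, not part of the test run):

	kubectl --context addons-000400 -n kube-system get pods -l k8s-app=kube-dns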
	I0917 16:59:58.613655    4408 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vxl6h" in "kube-system" namespace to be "Ready" ...
	I0917 16:59:58.922710    4408 pod_ready.go:93] pod "coredns-7c65d6cfc9-vxl6h" in "kube-system" namespace has status "Ready":"True"
	I0917 16:59:58.923667    4408 pod_ready.go:82] duration metric: took 310.0085ms for pod "coredns-7c65d6cfc9-vxl6h" in "kube-system" namespace to be "Ready" ...
	I0917 16:59:58.923667    4408 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-000400" in "kube-system" namespace to be "Ready" ...
	I0917 16:59:59.124608    4408 pod_ready.go:93] pod "etcd-addons-000400" in "kube-system" namespace has status "Ready":"True"
	I0917 16:59:59.124608    4408 pod_ready.go:82] duration metric: took 200.9393ms for pod "etcd-addons-000400" in "kube-system" namespace to be "Ready" ...
	I0917 16:59:59.124608    4408 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-000400" in "kube-system" namespace to be "Ready" ...
	I0917 16:59:59.608812    4408 pod_ready.go:93] pod "kube-apiserver-addons-000400" in "kube-system" namespace has status "Ready":"True"
	I0917 16:59:59.608905    4408 pod_ready.go:82] duration metric: took 484.2932ms for pod "kube-apiserver-addons-000400" in "kube-system" namespace to be "Ready" ...
	I0917 16:59:59.609187    4408 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-000400" in "kube-system" namespace to be "Ready" ...
	I0917 17:00:00.012888    4408 pod_ready.go:93] pod "kube-controller-manager-addons-000400" in "kube-system" namespace has status "Ready":"True"
	I0917 17:00:00.012888    4408 pod_ready.go:82] duration metric: took 403.6395ms for pod "kube-controller-manager-addons-000400" in "kube-system" namespace to be "Ready" ...
	I0917 17:00:00.012888    4408 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fcj2x" in "kube-system" namespace to be "Ready" ...
	I0917 17:00:00.307653    4408 pod_ready.go:93] pod "kube-proxy-fcj2x" in "kube-system" namespace has status "Ready":"True"
	I0917 17:00:00.307653    4408 pod_ready.go:82] duration metric: took 294.7633ms for pod "kube-proxy-fcj2x" in "kube-system" namespace to be "Ready" ...
	I0917 17:00:00.307810    4408 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-000400" in "kube-system" namespace to be "Ready" ...
	I0917 17:00:00.614514    4408 pod_ready.go:93] pod "kube-scheduler-addons-000400" in "kube-system" namespace has status "Ready":"True"
	I0917 17:00:00.614514    4408 pod_ready.go:82] duration metric: took 306.7019ms for pod "kube-scheduler-addons-000400" in "kube-system" namespace to be "Ready" ...
	I0917 17:00:00.614514    4408 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-2ftgf" in "kube-system" namespace to be "Ready" ...
	I0917 17:00:01.909420    4408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (29.6708663s)
	I0917 17:00:01.909468    4408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (29.6709783s)
	I0917 17:00:01.909619    4408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (29.6694771s)
	I0917 17:00:01.909727    4408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (29.5671515s)
	I0917 17:00:01.909774    4408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (29.4748314s)
	I0917 17:00:01.909974    4408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (29.4749376s)
	I0917 17:00:01.909974    4408 addons.go:475] Verifying addon ingress=true in "addons-000400"
	I0917 17:00:01.909974    4408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (28.3731731s)
	I0917 17:00:01.910192    4408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (27.5786733s)
	I0917 17:00:01.910192    4408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (28.0767141s)
	I0917 17:00:01.910395    4408 addons.go:475] Verifying addon registry=true in "addons-000400"
	I0917 17:00:01.910395    4408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (27.3791243s)
	I0917 17:00:01.910395    4408 addons.go:475] Verifying addon metrics-server=true in "addons-000400"
	I0917 17:00:01.910807    4408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (26.2782574s)
	I0917 17:00:01.910965    4408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (24.6766333s)
	I0917 17:00:01.913667    4408 out.go:177] * Verifying ingress addon...
	W0917 17:00:01.914124    4408 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0917 17:00:01.916876    4408 out.go:177] * Verifying registry addon...
	I0917 17:00:01.924862    4408 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-000400 service yakd-dashboard -n yakd-dashboard
	
	I0917 17:00:01.925211    4408 retry.go:31] will retry after 322.669145ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
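Note: the apply failure above is an ordering issue. csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass, but the snapshot.storage.k8s.io/v1 CRDs created in the same apply had not yet been established by the API server, hence "ensure CRDs are installed first"; the retry (and the apply --force that follows below at 17:00:02) succeeds once the CRDs are registered. Outside of this retry loop, a minimal sketch of the same fix is to apply the CRDs on their own and wait for them to be established before applying the class (illustrative commands, assuming kubectl points at this cluster):

	# apply the CRDs first, then block until the API server has established them
	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	# only then apply the VolumeSnapshotClass that references the new API
	kubectl apply -f csi-hostpath-snapshotclass.yaml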
	I0917 17:00:01.933822    4408 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0917 17:00:01.935298    4408 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0917 17:00:02.221525    4408 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0917 17:00:02.221525    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:02.221525    4408 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0917 17:00:02.221525    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:02.261343    4408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 17:00:02.618062    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:02.618062    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:02.823605    4408 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-2ftgf" in "kube-system" namespace has status "Ready":"False"
	I0917 17:00:03.013557    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:03.014344    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:03.533819    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:03.534260    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:04.021919    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:04.022964    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:04.029992    4408 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (16.1948121s)
	I0917 17:00:04.029992    4408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (24.997951s)
	I0917 17:00:04.029992    4408 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-000400"
	I0917 17:00:04.037330    4408 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 17:00:04.039157    4408 out.go:177] * Verifying csi-hostpath-driver addon...
	I0917 17:00:04.045696    4408 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0917 17:00:04.047159    4408 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0917 17:00:04.048670    4408 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0917 17:00:04.048670    4408 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0917 17:00:04.111246    4408 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0917 17:00:04.111294    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:04.217696    4408 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0917 17:00:04.217696    4408 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0917 17:00:04.418933    4408 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 17:00:04.418933    4408 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0917 17:00:04.614089    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:04.614089    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:04.615402    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:04.630909    4408 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 17:00:05.007582    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:05.008727    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:05.110605    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:05.130667    4408 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-2ftgf" in "kube-system" namespace has status "Ready":"False"
	I0917 17:00:05.509644    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:05.509950    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:05.610878    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:06.011626    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:06.012893    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:06.111210    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:06.510675    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:06.511756    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:06.611614    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:07.010934    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:07.012991    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:07.115700    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:07.221072    4408 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-2ftgf" in "kube-system" namespace has status "Ready":"False"
	I0917 17:00:07.405716    4408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.1437617s)
	I0917 17:00:07.518054    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:07.518576    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:07.711294    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:08.028398    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:08.030252    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:08.130723    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:08.509100    4408 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (3.8780889s)
	I0917 17:00:08.517651    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:08.523489    4408 addons.go:475] Verifying addon gcp-auth=true in "addons-000400"
	I0917 17:00:08.528195    4408 out.go:177] * Verifying gcp-auth addon...
	I0917 17:00:08.537061    4408 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0917 17:00:08.616757    4408 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0917 17:00:08.619058    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:08.620882    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:08.944346    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:08.945278    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:09.055660    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:09.449637    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:09.449913    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:09.556100    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:09.637748    4408 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-2ftgf" in "kube-system" namespace has status "Ready":"False"
	I0917 17:00:09.944106    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:09.944791    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:10.057910    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:10.443398    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:10.443398    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:10.554593    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:10.945047    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:10.945390    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:11.055300    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:11.447934    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:11.447934    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:11.556933    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:11.944453    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:11.945222    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:12.055825    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:12.129930    4408 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-2ftgf" in "kube-system" namespace has status "Ready":"False"
	I0917 17:00:12.443213    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:12.443636    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:12.557534    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:12.943272    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:12.943984    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:13.056212    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:13.442980    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:13.444954    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:13.557930    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:13.943441    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:13.944346    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:14.058011    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:14.130696    4408 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-2ftgf" in "kube-system" namespace has status "Ready":"False"
	I0917 17:00:14.442842    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:14.443505    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:14.557874    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:14.943579    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:14.944221    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:15.056417    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:15.444219    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:15.445555    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:15.557218    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:15.944070    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:15.945480    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:16.056899    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:16.442024    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:16.443917    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:16.556457    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:16.630617    4408 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-2ftgf" in "kube-system" namespace has status "Ready":"False"
	I0917 17:00:16.944029    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:16.944029    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:17.056869    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:17.444361    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:17.445088    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:17.556255    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:17.944532    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:17.945708    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:18.055967    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:18.446626    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:18.446760    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:18.557913    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:18.947035    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:18.947671    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:19.056479    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:19.132736    4408 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-2ftgf" in "kube-system" namespace has status "Ready":"False"
	I0917 17:00:19.444145    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:19.446542    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:19.559114    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:19.943073    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:19.943644    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:20.059477    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:20.444978    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:20.445498    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:20.559824    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:20.945150    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:20.947829    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:21.057818    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:21.136625    4408 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-2ftgf" in "kube-system" namespace has status "Ready":"False"
	I0917 17:00:21.446589    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:21.448157    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:21.556884    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:21.943095    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:21.943095    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:22.057711    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:22.443083    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:22.449489    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:22.556626    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:23.004278    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:23.005168    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:23.212366    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:23.219716    4408 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-2ftgf" in "kube-system" namespace has status "Ready":"False"
	I0917 17:00:23.581733    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:23.581958    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:23.582639    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:24.067197    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:24.067908    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:24.069431    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:24.474735    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:24.475217    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:24.673650    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:24.943538    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:24.944285    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:25.257238    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:25.260743    4408 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-2ftgf" in "kube-system" namespace has status "Ready":"False"
	I0917 17:00:25.452779    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:25.453206    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:25.558210    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:25.946223    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:25.951668    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:26.069835    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:26.446020    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:26.446318    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:26.559460    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:26.950198    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:26.952204    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:27.065106    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:27.447016    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:27.449948    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:27.558294    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:27.628910    4408 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-2ftgf" in "kube-system" namespace has status "Ready":"True"
	I0917 17:00:27.628910    4408 pod_ready.go:82] duration metric: took 27.0141693s for pod "nvidia-device-plugin-daemonset-2ftgf" in "kube-system" namespace to be "Ready" ...
	I0917 17:00:27.628910    4408 pod_ready.go:39] duration metric: took 50.8096909s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 17:00:27.628910    4408 api_server.go:52] waiting for apiserver process to appear ...
	I0917 17:00:27.647900    4408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:00:27.719806    4408 api_server.go:72] duration metric: took 57.2002323s to wait for apiserver process to appear ...
	I0917 17:00:27.719890    4408 api_server.go:88] waiting for apiserver healthz status ...
	I0917 17:00:27.719991    4408 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:53754/healthz ...
	I0917 17:00:27.736569    4408 api_server.go:279] https://127.0.0.1:53754/healthz returned 200:
	ok
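Note: the healthz probe above can be reproduced by hand against the forwarded apiserver port (53754 on this host; the port varies per run). A minimal sketch, assuming the default RBAC that leaves /healthz readable without authentication (illustrative command, not part of the test run):

	# -k skips verification of the cluster's self-signed certificate; expected output is "ok"
	curl -sk https://127.0.0.1:53754/healthz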
	I0917 17:00:27.739880    4408 api_server.go:141] control plane version: v1.31.1
	I0917 17:00:27.739955    4408 api_server.go:131] duration metric: took 20.0648ms to wait for apiserver health ...
	I0917 17:00:27.740011    4408 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 17:00:27.814936    4408 system_pods.go:59] 18 kube-system pods found
	I0917 17:00:27.814936    4408 system_pods.go:61] "coredns-7c65d6cfc9-vxl6h" [20aab1e4-df34-403c-86be-e9411bb6f1cf] Running
	I0917 17:00:27.814936    4408 system_pods.go:61] "csi-hostpath-attacher-0" [080a560a-ba22-4cff-a5b8-dabe12e1e5b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0917 17:00:27.814936    4408 system_pods.go:61] "csi-hostpath-resizer-0" [dbfd3541-f01e-4cda-a53b-243280b8fd10] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0917 17:00:27.814936    4408 system_pods.go:61] "csi-hostpathplugin-xrs8g" [c04d1f1e-6c48-4a67-8d09-fd65b84995c8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0917 17:00:27.814936    4408 system_pods.go:61] "etcd-addons-000400" [2c435057-5362-421f-8dd7-6a3aef9d309a] Running
	I0917 17:00:27.814936    4408 system_pods.go:61] "kube-apiserver-addons-000400" [485e409e-91fb-4fa1-adef-03f369245f39] Running
	I0917 17:00:27.814936    4408 system_pods.go:61] "kube-controller-manager-addons-000400" [ab6fdd5b-d7bf-43e0-b1a8-86ff63dfbdf7] Running
	I0917 17:00:27.814936    4408 system_pods.go:61] "kube-ingress-dns-minikube" [9ce13c55-6fda-4ae9-9a37-1f5f3cb8b1b3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0917 17:00:27.814936    4408 system_pods.go:61] "kube-proxy-fcj2x" [bfb64a93-338f-4610-ba9a-bc127879b1a5] Running
	I0917 17:00:27.814936    4408 system_pods.go:61] "kube-scheduler-addons-000400" [86430a7d-9c04-4df0-86e8-ea0d2b9e8c2d] Running
	I0917 17:00:27.814936    4408 system_pods.go:61] "metrics-server-84c5f94fbc-qklw7" [1491f166-0f42-45d4-af2d-89a300617312] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 17:00:27.814936    4408 system_pods.go:61] "nvidia-device-plugin-daemonset-2ftgf" [4de3ed84-6797-4a61-9b52-ef4d7b038511] Running
	I0917 17:00:27.814936    4408 system_pods.go:61] "registry-66c9cd494c-kzz9h" [a9a220b4-295f-4642-a504-2a586405571c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0917 17:00:27.814936    4408 system_pods.go:61] "registry-proxy-jnlf8" [abc14e2f-1004-4195-a33b-94ce29de0c38] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0917 17:00:27.814936    4408 system_pods.go:61] "snapshot-controller-56fcc65765-h7gtd" [2ab0275e-2aa1-4e30-ba10-dd7228b7dc91] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 17:00:27.814936    4408 system_pods.go:61] "snapshot-controller-56fcc65765-nv9hz" [5dec660e-92fd-452b-bd32-013439d0f1b4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 17:00:27.814936    4408 system_pods.go:61] "storage-provisioner" [8225c0c0-c1f6-42cc-991b-41ca1d4efbf0] Running
	I0917 17:00:27.814936    4408 system_pods.go:61] "tiller-deploy-b48cc5f79-m6jxn" [3b64df46-7619-4d49-9a8c-8d2e767f7fad] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0917 17:00:27.814936    4408 system_pods.go:74] duration metric: took 74.9239ms to wait for pod list to return data ...
	I0917 17:00:27.814936    4408 default_sa.go:34] waiting for default service account to be created ...
	I0917 17:00:27.822379    4408 default_sa.go:45] found service account: "default"
	I0917 17:00:27.822379    4408 default_sa.go:55] duration metric: took 7.4437ms for default service account to be created ...
	I0917 17:00:27.822379    4408 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 17:00:27.836372    4408 system_pods.go:86] 18 kube-system pods found
	I0917 17:00:27.836372    4408 system_pods.go:89] "coredns-7c65d6cfc9-vxl6h" [20aab1e4-df34-403c-86be-e9411bb6f1cf] Running
	I0917 17:00:27.836372    4408 system_pods.go:89] "csi-hostpath-attacher-0" [080a560a-ba22-4cff-a5b8-dabe12e1e5b3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0917 17:00:27.836372    4408 system_pods.go:89] "csi-hostpath-resizer-0" [dbfd3541-f01e-4cda-a53b-243280b8fd10] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0917 17:00:27.836372    4408 system_pods.go:89] "csi-hostpathplugin-xrs8g" [c04d1f1e-6c48-4a67-8d09-fd65b84995c8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0917 17:00:27.836372    4408 system_pods.go:89] "etcd-addons-000400" [2c435057-5362-421f-8dd7-6a3aef9d309a] Running
	I0917 17:00:27.836372    4408 system_pods.go:89] "kube-apiserver-addons-000400" [485e409e-91fb-4fa1-adef-03f369245f39] Running
	I0917 17:00:27.836372    4408 system_pods.go:89] "kube-controller-manager-addons-000400" [ab6fdd5b-d7bf-43e0-b1a8-86ff63dfbdf7] Running
	I0917 17:00:27.836372    4408 system_pods.go:89] "kube-ingress-dns-minikube" [9ce13c55-6fda-4ae9-9a37-1f5f3cb8b1b3] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0917 17:00:27.836372    4408 system_pods.go:89] "kube-proxy-fcj2x" [bfb64a93-338f-4610-ba9a-bc127879b1a5] Running
	I0917 17:00:27.836372    4408 system_pods.go:89] "kube-scheduler-addons-000400" [86430a7d-9c04-4df0-86e8-ea0d2b9e8c2d] Running
	I0917 17:00:27.836372    4408 system_pods.go:89] "metrics-server-84c5f94fbc-qklw7" [1491f166-0f42-45d4-af2d-89a300617312] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 17:00:27.836372    4408 system_pods.go:89] "nvidia-device-plugin-daemonset-2ftgf" [4de3ed84-6797-4a61-9b52-ef4d7b038511] Running
	I0917 17:00:27.836372    4408 system_pods.go:89] "registry-66c9cd494c-kzz9h" [a9a220b4-295f-4642-a504-2a586405571c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0917 17:00:27.836372    4408 system_pods.go:89] "registry-proxy-jnlf8" [abc14e2f-1004-4195-a33b-94ce29de0c38] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0917 17:00:27.836372    4408 system_pods.go:89] "snapshot-controller-56fcc65765-h7gtd" [2ab0275e-2aa1-4e30-ba10-dd7228b7dc91] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 17:00:27.836372    4408 system_pods.go:89] "snapshot-controller-56fcc65765-nv9hz" [5dec660e-92fd-452b-bd32-013439d0f1b4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 17:00:27.836372    4408 system_pods.go:89] "storage-provisioner" [8225c0c0-c1f6-42cc-991b-41ca1d4efbf0] Running
	I0917 17:00:27.836372    4408 system_pods.go:89] "tiller-deploy-b48cc5f79-m6jxn" [3b64df46-7619-4d49-9a8c-8d2e767f7fad] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0917 17:00:27.836372    4408 system_pods.go:126] duration metric: took 13.9921ms to wait for k8s-apps to be running ...
	I0917 17:00:27.836372    4408 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 17:00:27.849387    4408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:00:27.918810    4408 system_svc.go:56] duration metric: took 82.4373ms WaitForService to wait for kubelet
	I0917 17:00:27.918810    4408 kubeadm.go:582] duration metric: took 57.3992341s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 17:00:27.918810    4408 node_conditions.go:102] verifying NodePressure condition ...
	I0917 17:00:27.929662    4408 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0917 17:00:27.930603    4408 node_conditions.go:123] node cpu capacity is 16
	I0917 17:00:27.930603    4408 node_conditions.go:105] duration metric: took 11.7931ms to run NodePressure ...
	I0917 17:00:27.930603    4408 start.go:241] waiting for startup goroutines ...
	I0917 17:00:28.004876    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:28.004876    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:28.054886    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:28.508201    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:28.509219    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:28.608780    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:28.944093    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:28.945077    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:29.059081    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:29.445736    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:29.446716    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:29.555710    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:29.943318    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:29.944310    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:30.057321    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:30.507009    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:30.508998    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:30.606142    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:30.943283    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:30.944341    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:31.061982    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:31.445681    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:31.445712    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:31.559428    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:31.943151    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:31.943638    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:32.057903    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:32.443823    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:32.444332    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:32.557958    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:32.944230    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:32.944862    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:33.057217    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:33.445626    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:33.447991    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:33.560172    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:33.955579    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:33.956291    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:34.060984    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:34.442946    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:34.445433    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:34.557916    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:34.943526    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:34.943732    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:35.060998    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:35.448736    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:35.448736    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:35.560211    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:35.943708    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:35.943708    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:36.056707    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:36.441393    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:36.513215    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:36.606232    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:36.951989    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:36.952292    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:37.057257    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:37.443178    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:37.445173    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:37.556161    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:37.954920    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:37.956882    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:38.062392    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:38.446967    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:38.446967    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:38.558965    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:38.943666    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:38.944657    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:39.107470    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:39.505293    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:39.506207    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:39.606922    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:39.943503    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:39.946824    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:40.060861    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:40.443548    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:40.444233    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:40.557939    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:40.943020    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:40.943704    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:41.054097    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:41.444804    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:41.445490    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:41.562766    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:41.943448    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:41.944551    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:42.057512    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:42.443757    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:42.444306    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:42.555685    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:42.943688    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:42.944177    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:43.057650    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:43.443878    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:43.444903    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:43.566989    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:43.944302    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:43.944735    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:44.058592    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:44.443352    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:44.443996    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:44.556958    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:44.943459    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:44.943496    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:45.056932    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:45.445494    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:45.447141    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:45.556536    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:45.945379    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:45.945922    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:46.060685    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:46.442902    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:46.443095    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:46.557065    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:46.943454    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:46.943792    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:47.055856    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:47.444352    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:47.444508    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:47.557844    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:47.944790    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:47.945418    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:48.059019    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:48.443439    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:48.444629    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:48.557527    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:48.945712    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:48.946271    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:49.103918    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:49.442064    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:49.442064    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:49.556897    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:49.947477    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:49.947477    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:50.058068    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:50.442437    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:50.444146    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:50.554788    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:50.943975    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:50.944601    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:51.055837    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:51.443357    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:51.444235    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:51.559176    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:51.945386    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:51.945706    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:52.059914    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:52.446027    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:52.446272    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:52.556216    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:52.945782    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:52.949423    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:53.056218    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:53.444495    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:53.444495    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:53.556483    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:53.943964    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:53.944448    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:54.056299    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:54.444365    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:54.445158    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:54.556620    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:54.944125    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:54.944847    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:55.056339    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:55.444446    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:55.445838    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:55.558418    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:55.944707    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:55.946187    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:56.056874    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:56.446452    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:56.447599    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:56.556630    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:56.944754    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:56.945164    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:57.058010    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:57.444822    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:57.445004    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:57.556252    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:57.944826    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:57.945808    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:58.058847    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:58.442611    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:58.444612    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:58.555653    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:58.943243    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:58.944104    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:59.056625    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:59.442468    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:59.444821    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:00:59.556666    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:00:59.945896    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:00:59.947263    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:00.059170    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:00.443724    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:00.443879    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:00.557750    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:00.943749    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:00.943749    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:01.056743    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:01.447526    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:01.449190    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:01.558621    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:01.944328    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:01.944328    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:02.058465    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:02.443467    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:02.445122    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:02.556384    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:02.967232    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:02.968837    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:03.059985    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:03.442417    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:03.442945    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:03.556569    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:04.045426    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:04.048854    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:04.216278    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:05.538197    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:05.541693    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:05.541876    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:05.549472    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:05.551994    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:05.553155    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:05.561727    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:05.947014    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:05.947086    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:06.058218    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:06.442924    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:06.443935    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:06.603934    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:06.944987    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:06.945595    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:07.097082    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:07.446103    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:07.450648    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:07.557951    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:07.944146    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:07.944440    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:08.056983    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:08.445379    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:08.445927    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:08.560474    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:08.942864    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:08.943694    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:09.056463    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:09.443929    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:09.444228    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:09.558276    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:09.946093    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:09.946208    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:10.058430    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:10.444233    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:10.444233    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:10.650742    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:10.944749    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:10.944749    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:11.098819    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:11.443655    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:11.444645    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:11.615964    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:11.942995    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:11.945558    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:12.059870    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:12.444790    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:12.444790    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:12.559817    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:12.944031    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:12.952155    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:13.058835    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:13.444160    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:13.444878    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:13.558636    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:13.950993    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:13.951759    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:14.055990    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:14.445453    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:14.447715    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:14.556750    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:14.943782    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:14.945468    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:15.056980    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:15.443052    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:15.443696    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:15.556575    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:15.942664    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:15.942664    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:16.055620    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:16.448319    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:16.448319    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:16.557240    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:17.012720    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:17.012880    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:17.058123    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:17.445662    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:17.445662    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:17.555668    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:17.946025    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:17.952151    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:18.063487    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:18.448157    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:18.449649    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:18.558874    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:18.943752    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:18.943836    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:19.060132    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:19.500071    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:19.500728    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:19.602714    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:19.944358    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:19.944358    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:20.058198    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:20.445428    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:20.446751    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:20.558038    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:20.952393    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:20.953391    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:21.059676    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:21.499815    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:21.499815    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:21.606187    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:21.996916    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:21.997474    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:22.096590    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:22.449617    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:22.449617    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:22.602623    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:22.944990    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:22.944990    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:23.060266    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:23.443842    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:23.444585    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:23.558256    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:23.998447    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:23.999580    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:24.105498    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:24.449009    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:24.449787    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:24.556858    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:24.945517    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:24.945517    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:25.059512    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:25.445546    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:25.453524    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:25.556527    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:25.953564    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:25.954522    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:26.058783    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:26.444784    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:26.445914    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:26.556638    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:26.944624    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:26.944624    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:27.057574    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:27.444593    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:27.446687    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:27.558600    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:27.944566    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:27.944566    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:28.059602    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:28.498712    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:28.498712    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:28.599708    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:28.945618    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:28.946619    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:29.097803    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:29.446238    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:29.447332    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:29.558266    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:29.944438    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:29.945177    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:30.059157    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:30.455432    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:30.456383    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:30.557945    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:30.945467    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:30.945467    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:31.055586    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:31.443179    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:31.443953    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:31.557324    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:31.945061    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:31.945675    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:32.056743    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:32.448199    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:32.448372    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:32.557248    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:32.945717    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:32.948450    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:33.055974    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:33.445625    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:33.446443    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:33.557927    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:33.950465    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:33.950465    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:34.058851    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:34.438650    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:34.438650    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:34.557720    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:34.944488    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:34.944561    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:35.058276    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:35.447266    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:35.449872    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:35.558478    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:35.942342    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:35.996872    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:36.057983    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:36.445034    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:36.446011    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:36.560712    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:36.948170    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:36.948830    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:37.058168    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:37.444128    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:37.444852    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:37.558899    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:37.943099    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:37.943099    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:38.057872    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:38.442879    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:38.447175    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:38.557898    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:38.944003    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:38.944003    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:39.058270    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:39.442844    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:39.443315    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:39.558168    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:39.943419    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:39.944209    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:40.058173    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:40.444226    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:40.444476    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:40.565153    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:40.945418    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:40.945870    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:41.058059    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:41.445063    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:41.445423    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:41.556895    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:41.943115    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:41.943115    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:42.060445    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:42.443463    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:42.444521    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:42.666345    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:42.944161    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:42.944161    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:43.057404    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:43.445659    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:01:43.445897    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:43.557538    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:43.946232    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:43.946232    4408 kapi.go:107] duration metric: took 1m42.0100703s to wait for kubernetes.io/minikube-addons=registry ...
	I0917 17:01:44.056226    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:44.442761    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:44.595755    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:44.995787    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:45.097717    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:45.445745    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:45.558509    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:45.944197    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:46.059240    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:46.445637    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:46.557738    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:46.947826    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:47.057746    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:47.445352    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:47.556342    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:47.944339    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:48.056368    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:48.444419    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:48.557627    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:48.946664    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:49.065880    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:49.445744    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:49.558336    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:49.945129    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:50.099120    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:50.443699    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:50.556897    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:50.945447    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:51.058017    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:51.444252    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:51.557420    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:51.942808    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:52.058063    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:52.443556    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:52.557409    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:52.951189    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:53.057579    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:53.443887    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:53.557787    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:53.965623    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:54.162591    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:54.445569    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:54.556744    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:54.944663    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:55.173045    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:55.447382    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:55.556231    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:55.997262    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:56.098257    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:56.494263    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:56.694267    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:57.006262    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:57.096257    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:57.444633    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:57.596610    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:57.996694    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:58.096739    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:58.495133    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:58.597149    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:58.995162    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:59.097181    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:59.497237    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:01:59.596181    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:01:59.997020    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:00.108136    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:00.492526    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:00.558520    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:00.945760    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:01.574305    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:01.574593    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:01.586556    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:01.944949    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:02.056665    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:02.444990    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:02.559471    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:02.944881    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:03.061357    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:03.445619    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:03.558848    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:03.943534    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:04.057994    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:04.445107    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:04.558369    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:04.944759    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:05.093464    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:05.444201    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:05.556976    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:05.944764    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:06.093234    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:06.440890    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:06.558211    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:06.946391    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:07.058830    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:07.444850    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:07.557817    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:07.955871    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:08.368384    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:08.444623    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:08.557492    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:08.944565    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:09.060320    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:09.568246    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:09.568246    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:09.943531    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:10.059244    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:10.456986    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:10.609859    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:10.945203    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:11.054927    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:11.494022    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:11.617007    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:11.943006    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:12.056998    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:12.445170    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:12.595175    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:12.948189    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:13.096818    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:13.495941    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:13.597172    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:13.943524    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:14.057421    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:14.443727    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:14.570249    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:14.943349    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:15.061354    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:15.445999    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:15.558222    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:15.944765    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:16.091807    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:16.444903    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:16.557712    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:16.945495    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:17.057586    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:17.445605    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:17.557639    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:17.944169    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:18.057673    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:18.444488    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:18.559317    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:18.943435    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:19.090452    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:19.493464    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:19.600441    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:19.994696    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:20.095666    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:20.447688    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:20.558681    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:20.990394    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:21.061848    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:21.448185    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:21.594659    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:21.943612    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:22.060280    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:22.445719    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:22.595605    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:22.946496    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:23.058992    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:23.444953    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:23.594159    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:23.945108    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:24.059261    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:24.445471    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:24.559037    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:24.992749    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:25.097548    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:25.443320    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:25.559469    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:25.946013    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:26.060142    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:26.444130    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:26.557146    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:26.943756    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:27.057742    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:27.444317    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:27.558184    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:27.946455    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:28.056693    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:28.446993    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:28.562835    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:28.954618    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:29.058840    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:29.446033    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:29.557908    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:30.095788    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:30.097212    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:30.447042    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:30.556529    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:30.952134    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:31.057837    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:31.450817    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:31.560862    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:31.946817    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:32.056816    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:32.445708    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:32.558810    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:32.944960    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:33.058591    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:33.446378    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:33.558925    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:33.982148    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:34.217179    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:34.445269    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:34.588731    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:34.994316    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:35.090460    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:35.447937    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:35.556658    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:35.944845    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:36.092501    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:36.447462    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:36.592540    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:36.949748    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:37.056973    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:37.490814    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:37.556773    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:37.942752    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:38.055897    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:38.501518    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:38.588819    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:39.020872    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:39.199251    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:39.492286    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:39.593780    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:39.946889    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:40.088183    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:40.495343    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:40.596529    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:40.949980    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:41.060717    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:41.447354    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:41.555898    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:41.945737    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:42.058201    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:42.445657    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:42.557096    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:42.957790    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:43.060755    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:43.444746    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:43.560699    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:43.949034    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:44.056982    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:44.444546    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:44.557565    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:44.945540    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:45.057565    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:45.446648    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:45.559503    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:45.948182    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:46.090421    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:46.446550    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:46.557639    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:46.945394    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:47.058769    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:47.446091    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:47.563363    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:47.943263    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:48.060973    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:48.444495    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:48.557463    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:48.949062    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:49.061071    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:49.449064    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:49.558065    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:49.945086    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:50.088247    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:50.445651    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:50.559403    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:50.948440    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:51.090658    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:51.493138    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:51.591872    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:52.021448    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:52.247717    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:52.445993    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:52.599080    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:52.949288    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:53.057877    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:53.444752    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:53.558768    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:53.945558    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:54.060574    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:54.446566    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:54.562844    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:54.985477    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:55.057811    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:55.445943    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:55.558482    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:55.945674    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:56.091682    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:56.445273    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:56.561788    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:56.944882    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:57.064007    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:57.487208    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:57.558672    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:57.948053    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:58.087500    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:58.450997    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:58.559798    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:59.011235    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:59.094002    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:59.444753    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:02:59.561379    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:02:59.987339    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:00.090178    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:00.488341    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:00.588583    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:00.993535    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:01.106000    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:01.488731    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:01.590054    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:01.988203    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:02.087123    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:02.445730    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:02.558988    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:02.945304    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:03.059941    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:03.447664    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:03.559412    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:03.945315    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:04.058399    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:04.447080    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:04.560825    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:04.946476    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:05.057426    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:05.486773    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:05.589848    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:05.944588    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:06.085584    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:06.448520    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:06.558079    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:06.945267    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:07.059100    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:07.446452    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:07.559568    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:07.944140    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:08.059139    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:08.445808    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:08.588778    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:08.944332    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:09.058930    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:09.503340    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:09.588327    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:09.945770    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:10.081847    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:10.490887    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:10.558536    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:10.950506    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:11.057361    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:11.447895    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:11.559399    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:11.947168    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:12.093115    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:12.446441    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:12.557437    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:12.945980    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:13.059996    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:13.444346    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:13.587523    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:13.982663    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:14.083629    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:14.444778    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:14.583280    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:14.945714    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:15.057955    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:15.503638    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:15.559863    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:15.947470    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:16.059111    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:16.444429    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:16.558277    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:16.945776    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:17.057773    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:17.446202    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:17.557142    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:17.945170    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:18.059127    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:18.447154    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:18.583191    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:18.944956    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:19.085208    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:19.443552    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:19.586067    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:19.945643    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:20.057657    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:20.456311    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:20.564449    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:03:20.945806    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:21.057688    4408 kapi.go:107] duration metric: took 3m17.0088503s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0917 17:03:21.445248    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:21.946945    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:22.444048    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:22.949001    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:23.445375    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:23.945180    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:24.485756    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:24.946627    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:25.444143    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:25.944512    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:26.444346    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:26.946776    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:27.444342    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:27.944784    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:28.445016    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:28.944892    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:29.444044    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:29.944646    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:30.444411    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:30.946579    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:31.444221    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:31.945590    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:32.445011    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:32.946618    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:33.446646    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:33.945395    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:34.444560    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:34.946708    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:35.443937    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:35.948762    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:36.443964    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:36.945570    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:37.444281    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:37.944349    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:38.444914    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:38.945839    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:39.445568    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:39.944829    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:40.445906    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:40.946043    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:41.445317    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:41.944836    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:42.445509    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:42.945745    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:43.445159    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:43.945651    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:44.444896    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:44.945351    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:45.445785    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:45.944686    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:46.448438    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:46.945865    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:47.445575    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:47.945737    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:48.446077    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:48.947747    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:49.446320    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:49.944470    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:50.444930    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:50.951275    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:51.445833    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:51.944285    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:52.445481    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:52.981729    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:53.444271    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:53.945148    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:54.482883    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:54.945205    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:55.445053    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:55.945244    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:56.444086    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:56.948053    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:57.443966    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:57.946435    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:58.445423    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:58.952094    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:59.445788    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:03:59.946644    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:00.480962    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:00.945552    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:01.445238    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:01.946447    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:02.448160    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:02.948327    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:03.446195    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:03.945013    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:04.444996    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:04.945166    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:05.447060    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:05.946425    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:06.444564    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:06.945398    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:07.447943    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:07.947364    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:08.450950    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:08.947722    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:09.476718    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:09.946700    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:10.446472    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:10.957530    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:11.444947    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:11.978966    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:12.556092    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:12.995631    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:13.481898    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:13.980928    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:14.486193    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:14.977347    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:15.479771    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:16.005592    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:16.479487    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:16.981504    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:17.486233    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:17.975681    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:18.479838    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:18.978298    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:19.479950    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:19.946126    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:20.446047    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:20.945358    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:21.476347    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:21.944254    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:22.476006    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:22.947968    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:23.474929    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:23.948992    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:24.478323    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:24.946601    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:25.475954    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:25.981927    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:26.475045    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:26.946615    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:27.445230    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:27.945276    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:28.448452    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:28.945057    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:29.860568    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:29.945805    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:30.523408    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:30.945690    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:31.446149    4408 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:04:31.945633    4408 kapi.go:107] duration metric: took 4m30.0093968s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0917 17:05:36.550024    4408 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0917 17:05:36.550024    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:05:37.047682    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:05:37.548140    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:05:38.047628    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:05:38.547954    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:05:39.065900    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:05:39.565116    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:05:40.051489    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:05:40.551569    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:05:41.048927    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:05:41.550969    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:05:42.074123    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:05:42.563181    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:05:43.049405    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:05:43.549959    4408 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:05:44.078619    4408 kapi.go:107] duration metric: took 5m35.5386941s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0917 17:05:44.082413    4408 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-000400 cluster.
	I0917 17:05:44.092970    4408 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0917 17:05:44.095795    4408 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0917 17:05:44.099450    4408 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner-rancher, storage-provisioner, volcano, ingress-dns, cloud-spanner, helm-tiller, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0917 17:05:44.102696    4408 addons.go:510] duration metric: took 6m13.5804952s for enable addons: enabled=[nvidia-device-plugin storage-provisioner-rancher storage-provisioner volcano ingress-dns cloud-spanner helm-tiller metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0917 17:05:44.102765    4408 start.go:246] waiting for cluster config update ...
	I0917 17:05:44.102892    4408 start.go:255] writing updated cluster config ...
	I0917 17:05:44.117063    4408 ssh_runner.go:195] Run: rm -f paused
	I0917 17:05:44.430836    4408 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 17:05:44.433382    4408 out.go:177] * Done! kubectl is now configured to use "addons-000400" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 17 17:15:46 addons-000400 dockerd[1371]: time="2024-09-17T17:15:46.205747487Z" level=info msg="ignoring event" container=c851fab010072242d0bd7d23dc6cb2a7b0dc2b7f1cf1f649ca01edb47779d8b0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:15:46 addons-000400 dockerd[1371]: time="2024-09-17T17:15:46.210296757Z" level=info msg="ignoring event" container=b67ba0c26e46af2828006f6ccb771b6353f881486c383511d6f73a9781628a08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:15:46 addons-000400 dockerd[1371]: time="2024-09-17T17:15:46.292795909Z" level=info msg="ignoring event" container=12b1f5e939df899ffb9689703e6c3ead0b3928876db4a6e5d3f3db5e36f5bec9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:15:46 addons-000400 dockerd[1371]: time="2024-09-17T17:15:46.294633440Z" level=info msg="ignoring event" container=a816e3a843368996de363a9a170898384c40acd5e8d44b46d43d73a2f1607c94 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:15:46 addons-000400 dockerd[1371]: time="2024-09-17T17:15:46.716711600Z" level=info msg="ignoring event" container=40917b20823644e19d483312b04ed12a13ed92540c20364cf8caa040fb397719 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:15:47 addons-000400 dockerd[1371]: time="2024-09-17T17:15:47.115041682Z" level=info msg="ignoring event" container=dc6b1a12908eed12016f7a81dc05df971caa47bda0399d7973a103e61fd3489b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:15:47 addons-000400 cri-dockerd[1643]: time="2024-09-17T17:15:47Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"csi-hostpath-attacher-0_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 17 17:15:47 addons-000400 dockerd[1371]: time="2024-09-17T17:15:47.409701854Z" level=info msg="ignoring event" container=7deea99a3b4ea721d7183324aea7706096d4c3ba19c7a61a016258f93d1511e5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:15:47 addons-000400 dockerd[1371]: time="2024-09-17T17:15:47.516078402Z" level=info msg="ignoring event" container=5bf5e6e664ca443ec65c2c0d6e28c36f86df8503e539174218e80eaa4beca052 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:15:51 addons-000400 cri-dockerd[1643]: time="2024-09-17T17:15:51Z" level=error msg="error getting RW layer size for container ID 'b80351ef14a1c7f517dd22449af1d7d81828dd401624d5a6bfbf03c6c030eaf1': Error response from daemon: No such container: b80351ef14a1c7f517dd22449af1d7d81828dd401624d5a6bfbf03c6c030eaf1"
	Sep 17 17:15:51 addons-000400 cri-dockerd[1643]: time="2024-09-17T17:15:51Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'b80351ef14a1c7f517dd22449af1d7d81828dd401624d5a6bfbf03c6c030eaf1'"
	Sep 17 17:15:53 addons-000400 dockerd[1371]: time="2024-09-17T17:15:53.079935090Z" level=info msg="ignoring event" container=233322cb6d60f99c359cbdc6b47c92991572b5aa177b99120482ba1582f4dee2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:15:53 addons-000400 dockerd[1371]: time="2024-09-17T17:15:53.593998608Z" level=info msg="ignoring event" container=a911acea7b042176ddc831df1cf0f946231530c30624a0e21840ab67f120f4b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:15:54 addons-000400 dockerd[1371]: time="2024-09-17T17:15:54.299277770Z" level=info msg="ignoring event" container=05cbbe6ff8e9be3cd34a3518e4fc34166f9733144182e25cbd2f32fa0633c938 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:15:54 addons-000400 dockerd[1371]: time="2024-09-17T17:15:54.312236319Z" level=info msg="ignoring event" container=42d392e592f933e607d98c6bb32330a2fbe21319a22731f6ca9cff08b752e34b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:15:55 addons-000400 dockerd[1371]: time="2024-09-17T17:15:55.196763397Z" level=info msg="ignoring event" container=3c56d5913c08e7ddb5582a5fe8e30dbe737cfc68dcaed2d5f9fcf2d5340c70ac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:15:55 addons-000400 dockerd[1371]: time="2024-09-17T17:15:55.511902748Z" level=info msg="ignoring event" container=3b14b417b775207ac0aa025a9b082b18b54c2575280eb4e591b10d77cd2b1614 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:15:55 addons-000400 dockerd[1371]: time="2024-09-17T17:15:55.690547991Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 17 17:15:55 addons-000400 dockerd[1371]: time="2024-09-17T17:15:55.713453828Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 17 17:15:56 addons-000400 cri-dockerd[1643]: time="2024-09-17T17:15:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/832ff550e2af3671ba191440021ca838cfc798ec65d32c8a2eb7deecc40e4ad8/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 17 17:15:59 addons-000400 dockerd[1371]: time="2024-09-17T17:15:59.801662739Z" level=info msg="ignoring event" container=644b2785aef389351ad4ee937340e4e08fc16b5a2f33f040897bb377c2463f76 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:16:03 addons-000400 dockerd[1371]: time="2024-09-17T17:16:03.304465929Z" level=info msg="ignoring event" container=cffb078e2025b7c07384223b2fa38e55262e0ca9afe29595bef1c369695693dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:16:03 addons-000400 dockerd[1371]: time="2024-09-17T17:16:03.709664162Z" level=info msg="ignoring event" container=6330d6f050345cb6c972f362bbdd2fda5d9c649cd00443505a08c6dcbc398182 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:16:04 addons-000400 dockerd[1371]: time="2024-09-17T17:16:04.604608555Z" level=info msg="ignoring event" container=edd47c21d46be64996786a50626b5a3e5c7cb2eb2f789053259afe87bcbf3c51 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:16:06 addons-000400 cri-dockerd[1643]: time="2024-09-17T17:16:06Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                       ATTEMPT             POD ID              POD
	6d99f0eec3341       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                Less than a second ago   Created             nginx                      0                   832ff550e2af3       nginx
	ff2189915cff2       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 10 minutes ago           Running             gcp-auth                   0                   20d059749426a       gcp-auth-89d5ffd79-wrnsx
	169fd76d837c5       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago           Running             controller                 0                   c05b55534e768       ingress-nginx-controller-bc57996ff-v2jws
	80fbfbebe11e8       ce263a8653f9c                                                                                                                13 minutes ago           Exited              patch                      1                   7983327b8c05b       ingress-nginx-admission-patch-jg46w
	98a6ef631f4e9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   13 minutes ago           Exited              create                     0                   3fd4613c878f5       ingress-nginx-admission-create-mtsrj
	c1b74331b3e31       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              14 minutes ago           Running             registry-proxy             0                   49bf58568a814       registry-proxy-jnlf8
	cf317d67914c6       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                             14 minutes ago           Running             registry                   0                   77c55667a20c8       registry-66c9cd494c-kzz9h
	0e107b85af94a       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             15 minutes ago           Running             minikube-ingress-dns       0                   d744ed8a0ee4c       kube-ingress-dns-minikube
	78a0c7b2bb7e3       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     15 minutes ago           Running             nvidia-device-plugin-ctr   0                   51ae88a41a392       nvidia-device-plugin-daemonset-2ftgf
	d42a32ceab52d       6e38f40d628db                                                                                                                16 minutes ago           Running             storage-provisioner        0                   46dfe3b8c0e8c       storage-provisioner
	25804c7b6ff61       c69fa2e9cbf5f                                                                                                                16 minutes ago           Running             coredns                    0                   85d03166ca284       coredns-7c65d6cfc9-vxl6h
	d622d2b08b7b2       60c005f310ff3                                                                                                                16 minutes ago           Running             kube-proxy                 0                   9f95eb06aef45       kube-proxy-fcj2x
	428c8009479e4       175ffd71cce3d                                                                                                                16 minutes ago           Running             kube-controller-manager    0                   db060efa73ec1       kube-controller-manager-addons-000400
	236eb8c1333a9       9aa1fad941575                                                                                                                16 minutes ago           Running             kube-scheduler             0                   d023274c09ce0       kube-scheduler-addons-000400
	ea8f24f4c56f9       6bab7719df100                                                                                                                16 minutes ago           Running             kube-apiserver             0                   ad1a5ee472093       kube-apiserver-addons-000400
	01f7aa1ef1a07       2e96e5913fc06                                                                                                                16 minutes ago           Running             etcd                       0                   ebeb42b7afb6e       etcd-addons-000400
	
	
	==> controller_ingress [169fd76d837c] <==
	I0917 17:04:31.335165       8 nginx.go:271] "Starting NGINX Ingress controller"
	I0917 17:04:31.345590       8 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"f5179e71-9791-4edc-867a-5b2a2f4e938b", APIVersion:"v1", ResourceVersion:"711", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0917 17:04:31.368946       8 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"461e7891-c4df-419d-9272-99660540d3f6", APIVersion:"v1", ResourceVersion:"714", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0917 17:04:31.369139       8 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"278932c7-470c-46ac-8f04-66721f10b02f", APIVersion:"v1", ResourceVersion:"721", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0917 17:04:32.537955       8 nginx.go:317] "Starting NGINX process"
	I0917 17:04:32.538078       8 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0917 17:04:32.538941       8 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0917 17:04:32.539366       8 controller.go:193] "Configuration changes detected, backend reload required"
	I0917 17:04:32.557451       8 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0917 17:04:32.557560       8 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-v2jws"
	I0917 17:04:32.563256       8 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-v2jws" node="addons-000400"
	I0917 17:04:32.598322       8 controller.go:213] "Backend successfully reloaded"
	I0917 17:04:32.598553       8 controller.go:224] "Initial sync, sleeping for 1 second"
	I0917 17:04:32.598785       8 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-v2jws", UID:"4d68830c-bb79-4c64-a5b4-42cbbe0d0dd2", APIVersion:"v1", ResourceVersion:"852", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0917 17:15:55.509853       8 controller.go:1110] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0917 17:15:55.620411       8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.111s renderingIngressLength:1 renderingIngressTime:0.002s admissionTime:0.113s testedConfigurationSize:18.1kB}
	I0917 17:15:55.620571       8 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0917 17:15:55.700314       8 store.go:440] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	I0917 17:15:55.701458       8 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"e152e657-8334-4279-8b17-29484e6bb411", APIVersion:"networking.k8s.io/v1", ResourceVersion:"3218", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0917 17:15:57.538573       8 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
	I0917 17:15:57.538767       8 controller.go:193] "Configuration changes detected, backend reload required"
	I0917 17:15:57.599637       8 controller.go:213] "Backend successfully reloaded"
	I0917 17:15:57.600373       8 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-v2jws", UID:"4d68830c-bb79-4c64-a5b4-42cbbe0d0dd2", APIVersion:"v1", ResourceVersion:"852", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0917 17:16:00.871898       8 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
	W0917 17:16:04.205611       8 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
	
	
	==> coredns [25804c7b6ff6] <==
	[INFO] 127.0.0.1:40818 - 46426 "HINFO IN 1292894298834207576.371881477665449335. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.059493972s
	[INFO] 10.244.0.9:47115 - 48769 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000661088s
	[INFO] 10.244.0.9:47115 - 35469 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000827411s
	[INFO] 10.244.0.9:41470 - 30144 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000325343s
	[INFO] 10.244.0.9:41470 - 49349 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000347647s
	[INFO] 10.244.0.9:60956 - 7325 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00029884s
	[INFO] 10.244.0.9:60956 - 55186 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000271137s
	[INFO] 10.244.0.9:45906 - 64989 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000222729s
	[INFO] 10.244.0.9:45906 - 23507 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000560275s
	[INFO] 10.244.0.9:57273 - 44922 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000188825s
	[INFO] 10.244.0.9:57273 - 50807 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000323844s
	[INFO] 10.244.0.9:35331 - 17521 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000171423s
	[INFO] 10.244.0.9:35331 - 36212 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000067709s
	[INFO] 10.244.0.9:56042 - 61127 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000184824s
	[INFO] 10.244.0.9:56042 - 42946 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000213328s
	[INFO] 10.244.0.9:40889 - 56628 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000182324s
	[INFO] 10.244.0.9:40889 - 46902 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000220829s
	[INFO] 10.244.0.26:33033 - 16174 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000538572s
	[INFO] 10.244.0.26:50939 - 2367 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000771103s
	[INFO] 10.244.0.26:55685 - 21659 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000235531s
	[INFO] 10.244.0.26:55366 - 7173 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00022253s
	[INFO] 10.244.0.26:41875 - 13473 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000327144s
	[INFO] 10.244.0.26:52466 - 4608 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000877717s
	[INFO] 10.244.0.26:44607 - 38463 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 230 0.016324976s
	[INFO] 10.244.0.26:49115 - 64697 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.016520502s
	
	
	==> describe nodes <==
	Name:               addons-000400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-000400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=addons-000400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T16_59_25_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-000400
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 16:59:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-000400
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:15:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 17:15:56 +0000   Tue, 17 Sep 2024 16:59:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 17:15:56 +0000   Tue, 17 Sep 2024 16:59:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 17:15:56 +0000   Tue, 17 Sep 2024 16:59:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 17:15:56 +0000   Tue, 17 Sep 2024 16:59:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-000400
	Capacity:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868684Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868684Ki
	  pods:               110
	System Info:
	  Machine ID:                 24db0c12263545f09926c4653c5481bb
	  System UUID:                24db0c12263545f09926c4653c5481bb
	  Boot ID:                    4eef06a3-6868-4ec2-9bef-e08441d95637
	  Kernel Version:             5.15.153.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m23s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  gcp-auth                    gcp-auth-89d5ffd79-wrnsx                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-v2jws    100m (0%)     0 (0%)      90Mi (0%)        0 (0%)         16m
	  kube-system                 coredns-7c65d6cfc9-vxl6h                    100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     16m
	  kube-system                 etcd-addons-000400                          100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         16m
	  kube-system                 kube-apiserver-addons-000400                250m (1%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-addons-000400       200m (1%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-fcj2x                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-addons-000400                100m (0%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 nvidia-device-plugin-daemonset-2ftgf        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 registry-66c9cd494c-kzz9h                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 registry-proxy-jnlf8                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (5%)   0 (0%)
	  memory             260Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                             Age                From             Message
	  ----     ------                             ----               ----             -------
	  Normal   Starting                           16m                kube-proxy       
	  Warning  PossibleMemoryBackedVolumesOnDisk  16m                kubelet          The tmpfs noswap option is not supported. Memory-backed volumes (e.g. secrets, emptyDirs, etc.) might be swapped to disk and should no longer be considered secure.
	  Normal   Starting                           16m                kubelet          Starting kubelet.
	  Warning  CgroupV1                           16m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory            16m (x7 over 16m)  kubelet          Node addons-000400 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure              16m (x7 over 16m)  kubelet          Node addons-000400 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID               16m (x7 over 16m)  kubelet          Node addons-000400 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced            16m                kubelet          Updated Node Allocatable limit across pods
	  Warning  PossibleMemoryBackedVolumesOnDisk  16m                kubelet          The tmpfs noswap option is not supported. Memory-backed volumes (e.g. secrets, emptyDirs, etc.) might be swapped to disk and should no longer be considered secure.
	  Normal   Starting                           16m                kubelet          Starting kubelet.
	  Warning  CgroupV1                           16m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced            16m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory            16m                kubelet          Node addons-000400 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure              16m                kubelet          Node addons-000400 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID               16m                kubelet          Node addons-000400 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode                     16m                node-controller  Node addons-000400 event: Registered Node addons-000400 in Controller
	
	
	==> dmesg <==
	[  +0.001383] FS-Cache: N-cookie d=000000002a2722ca{9P.session} n=00000000382fe14c
	[  +0.001438] FS-Cache: N-key=[10] '34323934393337353632'
	[  +0.041732] WSL (1) ERROR: ConfigApplyWindowsLibPath:2537: open /etc/ld.so.conf.d/ld.wsl.conf
	[  +0.000005]  failed 2
	[  +0.023466] WSL (1) WARNING: /usr/share/zoneinfo/Etc/UTC not found. Is the tzdata package installed?
	[  +0.475040] misc dxg: dxgk: dxgglobal_acquire_channel_lock: Failed to acquire global channel lock
	[  +2.535593] WSL (2) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.002693] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.002713] WSL (1) ERROR: ConfigMountFsTab:2589: Processing fstab with mount -a failed.
	[  +0.003812] WSL (1) ERROR: ConfigApplyWindowsLibPath:2537: open /etc/ld.so.conf.d/ld.wsl.conf
	[  +0.000004]  failed 2
	[  +0.007000] WSL (3) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.002357] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.004204] WSL (4) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.003007] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.072558] WSL (1) WARNING: /usr/share/zoneinfo/Etc/UTC not found. Is the tzdata package installed?
	[  +0.117094] misc dxg: dxgk: dxgglobal_acquire_channel_lock: Failed to acquire global channel lock
	[  +0.938742] netlink: 'init': attribute type 4 has an invalid length.
	[Sep17 16:32] tmpfs: Unknown parameter 'noswap'
	[ +15.280320] tmpfs: Unknown parameter 'noswap'
	[Sep17 16:59] tmpfs: Unknown parameter 'noswap'
	[  +9.522523] tmpfs: Unknown parameter 'noswap'
	
	
	==> etcd [01f7aa1ef1a0] <==
	{"level":"info","ts":"2024-09-17T17:06:33.574276Z","caller":"traceutil/trace.go:171","msg":"trace[1855056878] range","detail":"{range_begin:/registry/pods/volcano-system/volcano-scheduler-576bc46687-flh4h; range_end:; response_count:1; response_revision:1914; }","duration":"105.189919ms","start":"2024-09-17T17:06:33.469064Z","end":"2024-09-17T17:06:33.574254Z","steps":["trace[1855056878] 'agreement among raft nodes before linearized reading'  (duration: 104.369214ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T17:06:33.865619Z","caller":"traceutil/trace.go:171","msg":"trace[664340972] transaction","detail":"{read_only:false; response_revision:1917; number_of_response:1; }","duration":"201.798243ms","start":"2024-09-17T17:06:33.663792Z","end":"2024-09-17T17:06:33.865590Z","steps":["trace[664340972] 'process raft request'  (duration: 190.590614ms)","trace[664340972] 'compare'  (duration: 10.715567ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-17T17:06:35.058578Z","caller":"traceutil/trace.go:171","msg":"trace[1701735086] transaction","detail":"{read_only:false; response_revision:1943; number_of_response:1; }","duration":"204.399175ms","start":"2024-09-17T17:06:34.854107Z","end":"2024-09-17T17:06:35.058551Z","steps":["trace[1701735086] 'process raft request'  (duration: 199.358932ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T17:09:18.894379Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1571}
	{"level":"info","ts":"2024-09-17T17:09:18.955562Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1571,"took":"60.4802ms","hash":519820018,"current-db-size-bytes":9691136,"current-db-size":"9.7 MB","current-db-size-in-use-bytes":5472256,"current-db-size-in-use":"5.5 MB"}
	{"level":"info","ts":"2024-09-17T17:09:18.955681Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":519820018,"revision":1571,"compact-revision":-1}
	{"level":"info","ts":"2024-09-17T17:14:18.876564Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2264}
	{"level":"info","ts":"2024-09-17T17:14:18.920815Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2264,"took":"43.603185ms","hash":2793488944,"current-db-size-bytes":9691136,"current-db-size":"9.7 MB","current-db-size-in-use-bytes":3850240,"current-db-size-in-use":"3.9 MB"}
	{"level":"info","ts":"2024-09-17T17:14:18.920928Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2793488944,"revision":2264,"compact-revision":1571}
	{"level":"warn","ts":"2024-09-17T17:15:09.469151Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.021915ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128031960163729506 > lease_revoke:<id:70cc9200ec94bf15>","response":"size:29"}
	{"level":"info","ts":"2024-09-17T17:15:09.469506Z","caller":"traceutil/trace.go:171","msg":"trace[1006988451] linearizableReadLoop","detail":"{readStateIndex:3082; appliedIndex:3081; }","duration":"321.337219ms","start":"2024-09-17T17:15:09.148156Z","end":"2024-09-17T17:15:09.469494Z","steps":["trace[1006988451] 'read index received'  (duration: 145.914754ms)","trace[1006988451] 'applied index is now lower than readState.Index'  (duration: 175.420765ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-17T17:15:09.469596Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"321.507341ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T17:15:09.469619Z","caller":"traceutil/trace.go:171","msg":"trace[1359780393] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2856; }","duration":"321.537645ms","start":"2024-09-17T17:15:09.148031Z","end":"2024-09-17T17:15:09.469610Z","steps":["trace[1359780393] 'agreement among raft nodes before linearized reading'  (duration: 321.488339ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T17:15:09.469640Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-17T17:15:09.147814Z","time spent":"321.82018ms","remote":"127.0.0.1:52428","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-09-17T17:15:09.469628Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"268.108524ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T17:15:09.469682Z","caller":"traceutil/trace.go:171","msg":"trace[1965954717] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2856; }","duration":"268.16183ms","start":"2024-09-17T17:15:09.201508Z","end":"2024-09-17T17:15:09.469670Z","steps":["trace[1965954717] 'agreement among raft nodes before linearized reading'  (duration: 268.096522ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T17:15:13.713546Z","caller":"traceutil/trace.go:171","msg":"trace[1208509108] transaction","detail":"{read_only:false; response_revision:2871; number_of_response:1; }","duration":"109.707901ms","start":"2024-09-17T17:15:13.603813Z","end":"2024-09-17T17:15:13.713521Z","steps":["trace[1208509108] 'process raft request'  (duration: 109.094323ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T17:15:46.391470Z","caller":"traceutil/trace.go:171","msg":"trace[312645290] transaction","detail":"{read_only:false; response_revision:3139; number_of_response:1; }","duration":"168.492242ms","start":"2024-09-17T17:15:46.222953Z","end":"2024-09-17T17:15:46.391445Z","steps":["trace[312645290] 'process raft request'  (duration: 168.424234ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T17:15:46.391873Z","caller":"traceutil/trace.go:171","msg":"trace[86248278] transaction","detail":"{read_only:false; response_revision:3138; number_of_response:1; }","duration":"184.751182ms","start":"2024-09-17T17:15:46.207109Z","end":"2024-09-17T17:15:46.391860Z","steps":["trace[86248278] 'process raft request'  (duration: 184.126604ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T17:15:46.392371Z","caller":"traceutil/trace.go:171","msg":"trace[134952647] linearizableReadLoop","detail":"{readStateIndex:3378; appliedIndex:3376; }","duration":"101.414826ms","start":"2024-09-17T17:15:46.290941Z","end":"2024-09-17T17:15:46.392356Z","steps":["trace[134952647] 'read index received'  (duration: 100.205974ms)","trace[134952647] 'applied index is now lower than readState.Index'  (duration: 1.207351ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-17T17:15:46.392488Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.491924ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/csi-attacher\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T17:15:46.392525Z","caller":"traceutil/trace.go:171","msg":"trace[1315850519] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/csi-attacher; range_end:; response_count:0; response_revision:3139; }","duration":"183.541331ms","start":"2024-09-17T17:15:46.208974Z","end":"2024-09-17T17:15:46.392515Z","steps":["trace[1315850519] 'agreement among raft nodes before linearized reading'  (duration: 183.466021ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-17T17:15:46.417057Z","caller":"traceutil/trace.go:171","msg":"trace[1198637641] transaction","detail":"{read_only:false; response_revision:3140; number_of_response:1; }","duration":"123.569605ms","start":"2024-09-17T17:15:46.293472Z","end":"2024-09-17T17:15:46.417042Z","steps":["trace[1198637641] 'process raft request'  (duration: 123.439789ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-17T17:15:46.417404Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.369625ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-17T17:15:46.417433Z","caller":"traceutil/trace.go:171","msg":"trace[1440837531] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:3140; }","duration":"113.40783ms","start":"2024-09-17T17:15:46.304016Z","end":"2024-09-17T17:15:46.417424Z","steps":["trace[1440837531] 'agreement among raft nodes before linearized reading'  (duration: 113.32612ms)"],"step_count":1}
	
	
	==> gcp-auth [ff2189915cff] <==
	2024/09/17 17:06:43 Ready to write response ...
	2024/09/17 17:06:43 Ready to marshal response ...
	2024/09/17 17:06:43 Ready to write response ...
	2024/09/17 17:14:51 Ready to marshal response ...
	2024/09/17 17:14:51 Ready to write response ...
	2024/09/17 17:14:51 Ready to marshal response ...
	2024/09/17 17:14:51 Ready to write response ...
	2024/09/17 17:14:59 Ready to marshal response ...
	2024/09/17 17:14:59 Ready to write response ...
	2024/09/17 17:14:59 Ready to marshal response ...
	2024/09/17 17:14:59 Ready to write response ...
	2024/09/17 17:14:59 Ready to marshal response ...
	2024/09/17 17:14:59 Ready to write response ...
	2024/09/17 17:15:02 Ready to marshal response ...
	2024/09/17 17:15:02 Ready to write response ...
	2024/09/17 17:15:03 Ready to marshal response ...
	2024/09/17 17:15:03 Ready to write response ...
	2024/09/17 17:15:08 Ready to marshal response ...
	2024/09/17 17:15:08 Ready to write response ...
	2024/09/17 17:15:31 Ready to marshal response ...
	2024/09/17 17:15:31 Ready to write response ...
	2024/09/17 17:15:34 Ready to marshal response ...
	2024/09/17 17:15:34 Ready to write response ...
	2024/09/17 17:15:56 Ready to marshal response ...
	2024/09/17 17:15:56 Ready to write response ...
	
	
	==> kernel <==
	 17:16:07 up  2:14,  0 users,  load average: 2.76, 1.47, 1.42
	Linux addons-000400 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [ea8f24f4c56f] <==
	W0917 17:06:35.257634       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0917 17:06:35.275307       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0917 17:06:36.084837       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0917 17:06:36.582348       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0917 17:14:59.778467       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.31.130"}
	I0917 17:15:22.348528       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0917 17:15:25.633602       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0917 17:15:40.789206       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.36:34372: read: connection reset by peer
	I0917 17:15:53.597929       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:15:53.598139       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 17:15:53.712451       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:15:53.712653       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 17:15:53.722947       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:15:53.723144       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 17:15:53.820543       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:15:53.820786       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 17:15:53.902961       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 17:15:53.903158       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0917 17:15:54.791098       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0917 17:15:54.903785       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0917 17:15:55.014738       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0917 17:15:55.622398       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0917 17:15:56.212615       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.82.96"}
	I0917 17:15:59.289842       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0917 17:16:00.415556       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	
	
	==> kube-controller-manager [428c8009479e] <==
	E0917 17:15:56.358411       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 17:15:56.712899       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-000400"
	W0917 17:15:57.790248       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:15:57.790452       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:15:58.435420       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:15:58.435534       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:15:59.089315       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:15:59.089454       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 17:16:00.106324       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	E0917 17:16:00.419092       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 17:16:00.791281       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0917 17:16:00.791381       1 shared_informer.go:320] Caches are synced for resource quota
	I0917 17:16:01.111843       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0917 17:16:01.111961       1 shared_informer.go:320] Caches are synced for garbage collector
	W0917 17:16:01.431937       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:16:01.432050       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:16:01.714089       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:16:01.714206       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:16:02.139086       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:16:02.139197       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0917 17:16:03.186552       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:16:03.186686       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0917 17:16:03.399149       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="13.001µs"
	W0917 17:16:04.590131       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0917 17:16:04.590295       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [d622d2b08b7b] <==
	E0917 16:59:42.005597       1 metrics.go:340] "failed to initialize nfacct client" err="nfacct sub-system not available"
	E0917 16:59:42.105725       1 metrics.go:340] "failed to initialize nfacct client" err="nfacct sub-system not available"
	I0917 16:59:42.221380       1 server_linux.go:66] "Using iptables proxy"
	I0917 16:59:43.212573       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0917 16:59:43.212661       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 16:59:44.105945       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 16:59:44.106169       1 server_linux.go:169] "Using iptables Proxier"
	I0917 16:59:44.207565       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	E0917 16:59:44.232368       1 proxier.go:283] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv4"
	E0917 16:59:44.305540       1 proxier.go:283] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv6"
	I0917 16:59:44.305799       1 server.go:483] "Version info" version="v1.31.1"
	I0917 16:59:44.305852       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 16:59:44.308712       1 config.go:199] "Starting service config controller"
	I0917 16:59:44.308756       1 config.go:105] "Starting endpoint slice config controller"
	I0917 16:59:44.308769       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 16:59:44.308778       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 16:59:44.309200       1 config.go:328] "Starting node config controller"
	I0917 16:59:44.309212       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 16:59:44.408865       1 shared_informer.go:320] Caches are synced for service config
	I0917 16:59:44.408959       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 16:59:44.409639       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [236eb8c1333a] <==
	W0917 16:59:22.687688       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0917 16:59:22.687790       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 16:59:22.703208       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0917 16:59:22.703330       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:59:22.745633       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0917 16:59:22.745735       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 16:59:22.759554       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0917 16:59:22.759693       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0917 16:59:22.857411       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0917 16:59:22.857546       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:59:22.929471       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0917 16:59:22.929630       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 16:59:22.966474       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0917 16:59:22.966567       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0917 16:59:22.974891       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0917 16:59:22.974982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:59:23.023193       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0917 16:59:23.023330       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 16:59:23.065633       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0917 16:59:23.065749       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:59:23.073559       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0917 16:59:23.073659       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 16:59:23.112982       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 16:59:23.113112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0917 16:59:25.823799       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 17 17:16:00 addons-000400 kubelet[2571]: I0917 17:16:00.317322    2571 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67b9512f-ecce-4f4f-94fe-c774ad98e86a-debugfs" (OuterVolumeSpecName: "debugfs") pod "67b9512f-ecce-4f4f-94fe-c774ad98e86a" (UID: "67b9512f-ecce-4f4f-94fe-c774ad98e86a"). InnerVolumeSpecName "debugfs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 17 17:16:00 addons-000400 kubelet[2571]: I0917 17:16:00.317333    2571 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67b9512f-ecce-4f4f-94fe-c774ad98e86a-modules" (OuterVolumeSpecName: "modules") pod "67b9512f-ecce-4f4f-94fe-c774ad98e86a" (UID: "67b9512f-ecce-4f4f-94fe-c774ad98e86a"). InnerVolumeSpecName "modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 17 17:16:00 addons-000400 kubelet[2571]: I0917 17:16:00.317388    2571 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67b9512f-ecce-4f4f-94fe-c774ad98e86a-bpffs" (OuterVolumeSpecName: "bpffs") pod "67b9512f-ecce-4f4f-94fe-c774ad98e86a" (UID: "67b9512f-ecce-4f4f-94fe-c774ad98e86a"). InnerVolumeSpecName "bpffs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 17 17:16:00 addons-000400 kubelet[2571]: I0917 17:16:00.317399    2571 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67b9512f-ecce-4f4f-94fe-c774ad98e86a-run" (OuterVolumeSpecName: "run") pod "67b9512f-ecce-4f4f-94fe-c774ad98e86a" (UID: "67b9512f-ecce-4f4f-94fe-c774ad98e86a"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 17 17:16:00 addons-000400 kubelet[2571]: I0917 17:16:00.322772    2571 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67b9512f-ecce-4f4f-94fe-c774ad98e86a-kube-api-access-wp5bg" (OuterVolumeSpecName: "kube-api-access-wp5bg") pod "67b9512f-ecce-4f4f-94fe-c774ad98e86a" (UID: "67b9512f-ecce-4f4f-94fe-c774ad98e86a"). InnerVolumeSpecName "kube-api-access-wp5bg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 17:16:00 addons-000400 kubelet[2571]: I0917 17:16:00.417446    2571 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wp5bg\" (UniqueName: \"kubernetes.io/projected/67b9512f-ecce-4f4f-94fe-c774ad98e86a-kube-api-access-wp5bg\") on node \"addons-000400\" DevicePath \"\""
	Sep 17 17:16:00 addons-000400 kubelet[2571]: I0917 17:16:00.417578    2571 reconciler_common.go:288] "Volume detached for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/67b9512f-ecce-4f4f-94fe-c774ad98e86a-debugfs\") on node \"addons-000400\" DevicePath \"\""
	Sep 17 17:16:00 addons-000400 kubelet[2571]: I0917 17:16:00.417594    2571 reconciler_common.go:288] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/67b9512f-ecce-4f4f-94fe-c774ad98e86a-host\") on node \"addons-000400\" DevicePath \"\""
	Sep 17 17:16:00 addons-000400 kubelet[2571]: I0917 17:16:00.417605    2571 reconciler_common.go:288] "Volume detached for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/67b9512f-ecce-4f4f-94fe-c774ad98e86a-modules\") on node \"addons-000400\" DevicePath \"\""
	Sep 17 17:16:00 addons-000400 kubelet[2571]: I0917 17:16:00.417617    2571 reconciler_common.go:288] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/67b9512f-ecce-4f4f-94fe-c774ad98e86a-run\") on node \"addons-000400\" DevicePath \"\""
	Sep 17 17:16:00 addons-000400 kubelet[2571]: I0917 17:16:00.417625    2571 reconciler_common.go:288] "Volume detached for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/67b9512f-ecce-4f4f-94fe-c774ad98e86a-bpffs\") on node \"addons-000400\" DevicePath \"\""
	Sep 17 17:16:00 addons-000400 kubelet[2571]: I0917 17:16:00.417635    2571 reconciler_common.go:288] "Volume detached for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/67b9512f-ecce-4f4f-94fe-c774ad98e86a-cgroup\") on node \"addons-000400\" DevicePath \"\""
	Sep 17 17:16:01 addons-000400 kubelet[2571]: I0917 17:16:01.209535    2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67b9512f-ecce-4f4f-94fe-c774ad98e86a" path="/var/lib/kubelet/pods/67b9512f-ecce-4f4f-94fe-c774ad98e86a/volumes"
	Sep 17 17:16:04 addons-000400 kubelet[2571]: I0917 17:16:04.209114    2571 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jg9jg\" (UniqueName: \"kubernetes.io/projected/2320382f-5f47-49c3-8f84-c1ce72090531-kube-api-access-jg9jg\") pod \"2320382f-5f47-49c3-8f84-c1ce72090531\" (UID: \"2320382f-5f47-49c3-8f84-c1ce72090531\") "
	Sep 17 17:16:04 addons-000400 kubelet[2571]: I0917 17:16:04.209496    2571 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/2320382f-5f47-49c3-8f84-c1ce72090531-gcp-creds\") pod \"2320382f-5f47-49c3-8f84-c1ce72090531\" (UID: \"2320382f-5f47-49c3-8f84-c1ce72090531\") "
	Sep 17 17:16:04 addons-000400 kubelet[2571]: I0917 17:16:04.209661    2571 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2320382f-5f47-49c3-8f84-c1ce72090531-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "2320382f-5f47-49c3-8f84-c1ce72090531" (UID: "2320382f-5f47-49c3-8f84-c1ce72090531"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 17 17:16:04 addons-000400 kubelet[2571]: I0917 17:16:04.214207    2571 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2320382f-5f47-49c3-8f84-c1ce72090531-kube-api-access-jg9jg" (OuterVolumeSpecName: "kube-api-access-jg9jg") pod "2320382f-5f47-49c3-8f84-c1ce72090531" (UID: "2320382f-5f47-49c3-8f84-c1ce72090531"). InnerVolumeSpecName "kube-api-access-jg9jg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 17:16:04 addons-000400 kubelet[2571]: I0917 17:16:04.310668    2571 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jg9jg\" (UniqueName: \"kubernetes.io/projected/2320382f-5f47-49c3-8f84-c1ce72090531-kube-api-access-jg9jg\") on node \"addons-000400\" DevicePath \"\""
	Sep 17 17:16:04 addons-000400 kubelet[2571]: I0917 17:16:04.310808    2571 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/2320382f-5f47-49c3-8f84-c1ce72090531-gcp-creds\") on node \"addons-000400\" DevicePath \"\""
	Sep 17 17:16:05 addons-000400 kubelet[2571]: I0917 17:16:05.015935    2571 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v88p\" (UniqueName: \"kubernetes.io/projected/48204825-30d3-43e5-afd9-4a91c6cec06d-kube-api-access-4v88p\") pod \"48204825-30d3-43e5-afd9-4a91c6cec06d\" (UID: \"48204825-30d3-43e5-afd9-4a91c6cec06d\") "
	Sep 17 17:16:05 addons-000400 kubelet[2571]: I0917 17:16:05.022591    2571 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48204825-30d3-43e5-afd9-4a91c6cec06d-kube-api-access-4v88p" (OuterVolumeSpecName: "kube-api-access-4v88p") pod "48204825-30d3-43e5-afd9-4a91c6cec06d" (UID: "48204825-30d3-43e5-afd9-4a91c6cec06d"). InnerVolumeSpecName "kube-api-access-4v88p". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 17 17:16:05 addons-000400 kubelet[2571]: I0917 17:16:05.116912    2571 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-4v88p\" (UniqueName: \"kubernetes.io/projected/48204825-30d3-43e5-afd9-4a91c6cec06d-kube-api-access-4v88p\") on node \"addons-000400\" DevicePath \"\""
	Sep 17 17:16:05 addons-000400 kubelet[2571]: I0917 17:16:05.213871    2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2320382f-5f47-49c3-8f84-c1ce72090531" path="/var/lib/kubelet/pods/2320382f-5f47-49c3-8f84-c1ce72090531/volumes"
	Sep 17 17:16:06 addons-000400 kubelet[2571]: I0917 17:16:06.103192    2571 scope.go:117] "RemoveContainer" containerID="6330d6f050345cb6c972f362bbdd2fda5d9c649cd00443505a08c6dcbc398182"
	Sep 17 17:16:07 addons-000400 kubelet[2571]: I0917 17:16:07.298084    2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48204825-30d3-43e5-afd9-4a91c6cec06d" path="/var/lib/kubelet/pods/48204825-30d3-43e5-afd9-4a91c6cec06d/volumes"
	
	
	==> storage-provisioner [d42a32ceab52] <==
	I0917 16:59:54.120898       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 16:59:54.314964       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 16:59:54.315386       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0917 16:59:54.708077       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0917 16:59:54.708323       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-000400_32f41b04-c7ec-4de9-ac4c-3998dccc0a90!
	I0917 16:59:54.708454       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1821f8d0-8722-4ca6-8bb5-660c1ec597b4", APIVersion:"v1", ResourceVersion:"785", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-000400_32f41b04-c7ec-4de9-ac4c-3998dccc0a90 became leader
	I0917 16:59:54.911536       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-000400_32f41b04-c7ec-4de9-ac4c-3998dccc0a90!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-000400 -n addons-000400
helpers_test.go:261: (dbg) Run:  kubectl --context addons-000400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-mtsrj ingress-nginx-admission-patch-jg46w
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-000400 describe pod busybox ingress-nginx-admission-create-mtsrj ingress-nginx-admission-patch-jg46w
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-000400 describe pod busybox ingress-nginx-admission-create-mtsrj ingress-nginx-admission-patch-jg46w: exit status 1 (281.4314ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-000400/192.168.49.2
	Start Time:       Tue, 17 Sep 2024 17:06:43 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v4n6m (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-v4n6m:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m27s                   default-scheduler  Successfully assigned default/busybox to addons-000400
	  Normal   Pulling    7m54s (x4 over 9m27s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m54s (x4 over 9m26s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m54s (x4 over 9m26s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m39s (x6 over 9m26s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m17s (x21 over 9m26s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-mtsrj" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-jg46w" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-000400 describe pod busybox ingress-nginx-admission-create-mtsrj ingress-nginx-admission-patch-jg46w: exit status 1
--- FAIL: TestAddons/parallel/Registry (79.77s)

                                                
                                    
TestErrorSpam/setup (67.37s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-151900 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-151900 --driver=docker
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-151900 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-151900 --driver=docker: (1m7.3683392s)
error_spam_test.go:96: unexpected stderr: "! Failing to connect to https://registry.k8s.io/ from inside the minikube container"
error_spam_test.go:96: unexpected stderr: "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
error_spam_test.go:110: minikube stdout:
* [nospam-151900] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
- KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
- MINIKUBE_LOCATION=19662
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting "nospam-151900" primary control-plane node in "nospam-151900" cluster
* Pulling base image v0.0.45-1726589491-19662 ...
* Creating docker container (CPUs=2, Memory=2250MB) ...
* Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: default-storageclass, storage-provisioner
* Done! kubectl is now configured to use "nospam-151900" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Failing to connect to https://registry.k8s.io/ from inside the minikube container
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
--- FAIL: TestErrorSpam/setup (67.37s)
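The unexpected stderr above is a registry connectivity warning rather than a cluster failure. As a rough illustration only (not part of the test suite; the URL is taken from the warning itself), a comparable reachability check can be reproduced with a short Go probe:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// Probe https://registry.k8s.io/ with a plain HTTPS HEAD request and a
	// short timeout, reporting either the status or the connection error.
	func main() {
		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Head("https://registry.k8s.io/")
		if err != nil {
			fmt.Println("registry unreachable:", err)
			return
		}
		resp.Body.Close()
		fmt.Println("registry reachable:", resp.Status)
	}

Run from inside the minikube container a failure here would match the warning; run from the Windows host it helps separate a proxy/DNS problem on the host from one inside the container.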

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (5.32s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:735: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
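The failure above comes from creating a link at out\kubectl.exe while that file already exists from an earlier run. A minimal sketch of the pattern involved (hypothetical code using the paths from the message, not the test's actual implementation) removes a stale target before linking:

	package main

	import (
		"fmt"
		"os"
	)

	// linkReplacing deletes any existing destination before creating the hard
	// link, avoiding the Windows "file already exists" error seen above.
	func linkReplacing(src, dst string) error {
		if err := os.Remove(dst); err != nil && !os.IsNotExist(err) {
			return fmt.Errorf("removing %s: %w", dst, err)
		}
		return os.Link(src, dst)
	}

	func main() {
		if err := linkReplacing(`out\minikube-windows-amd64.exe`, `out\kubectl.exe`); err != nil {
			fmt.Println("link failed:", err)
		}
	}
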
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-388800
E0917 17:20:44.688833    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:235: (dbg) docker inspect functional-388800:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ff7cd7a1a430554378be7651e60db3ae63d18e1a263f4c8a80b93d9329f0bb7f",
	        "Created": "2024-09-17T17:18:44.877519695Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 45660,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-17T17:18:45.226364349Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bb3bcbaabeeeadbf6b43ae7d1d07e504b3c8a94ec024df89bcb237eba4f5e9b3",
	        "ResolvConfPath": "/var/lib/docker/containers/ff7cd7a1a430554378be7651e60db3ae63d18e1a263f4c8a80b93d9329f0bb7f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ff7cd7a1a430554378be7651e60db3ae63d18e1a263f4c8a80b93d9329f0bb7f/hostname",
	        "HostsPath": "/var/lib/docker/containers/ff7cd7a1a430554378be7651e60db3ae63d18e1a263f4c8a80b93d9329f0bb7f/hosts",
	        "LogPath": "/var/lib/docker/containers/ff7cd7a1a430554378be7651e60db3ae63d18e1a263f4c8a80b93d9329f0bb7f/ff7cd7a1a430554378be7651e60db3ae63d18e1a263f4c8a80b93d9329f0bb7f-json.log",
	        "Name": "/functional-388800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-388800:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-388800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a48a758ea25e4d9e13ba177c6802e005570b51f0ec27e9added9d8e8a2f49cb4-init/diff:/var/lib/docker/overlay2/af5d248a82a7dcbc887b000566b84b9011e4a8e13e36234ddfbc9ecd69f656b2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a48a758ea25e4d9e13ba177c6802e005570b51f0ec27e9added9d8e8a2f49cb4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a48a758ea25e4d9e13ba177c6802e005570b51f0ec27e9added9d8e8a2f49cb4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a48a758ea25e4d9e13ba177c6802e005570b51f0ec27e9added9d8e8a2f49cb4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-388800",
	                "Source": "/var/lib/docker/volumes/functional-388800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-388800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-388800",
	                "name.minikube.sigs.k8s.io": "functional-388800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5fadd78e10d4a9673e7eb817008847ba97225ef8f01bf622de302a1c9cbdf173",
	            "SandboxKey": "/var/run/docker/netns/5fadd78e10d4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54903"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54904"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54905"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54906"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "54907"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-388800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ba1796ef525d8c6cbb7e19427a228816aeba54090b7a510e2211db23bfa0f107",
	                    "EndpointID": "aa6dc1b90cf7cae6451ec253e3b729a713efb13c44176f0ccfc265093004a967",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-388800",
	                        "ff7cd7a1a430"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
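The inspect output above binds every exposed container port to a dedicated host port on 127.0.0.1. The provisioning log further down extracts the SSH mapping the same way, via a Go template passed to docker inspect; a minimal stand-alone sketch of that lookup (assuming the Docker CLI on the host and the container name functional-388800 from the output above; quoting shown for a POSIX shell) is:

    # Print the host port that Docker mapped to the container's SSH port (22/tcp)
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-388800

With the bindings shown above this prints 54903, the port the SSH provisioner later dials on 127.0.0.1.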
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-388800 -n functional-388800
E0917 17:20:44.851008    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 17:20:45.172434    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 logs -n 25
E0917 17:20:45.815001    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 17:20:47.097586    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-388800 logs -n 25: (2.4888711s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-151900 --log_dir                                     | nospam-151900     | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:17 UTC | 17 Sep 24 17:17 UTC |
	|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-151900 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-151900 --log_dir                                     | nospam-151900     | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:17 UTC | 17 Sep 24 17:17 UTC |
	|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-151900 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-151900 --log_dir                                     | nospam-151900     | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:17 UTC | 17 Sep 24 17:17 UTC |
	|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-151900 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-151900 --log_dir                                     | nospam-151900     | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:17 UTC | 17 Sep 24 17:17 UTC |
	|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-151900 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-151900 --log_dir                                     | nospam-151900     | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:17 UTC | 17 Sep 24 17:18 UTC |
	|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-151900 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-151900 --log_dir                                     | nospam-151900     | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:18 UTC | 17 Sep 24 17:18 UTC |
	|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-151900 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-151900 --log_dir                                     | nospam-151900     | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:18 UTC | 17 Sep 24 17:18 UTC |
	|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-151900 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-151900                                            | nospam-151900     | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:18 UTC | 17 Sep 24 17:18 UTC |
	| start   | -p functional-388800                                        | functional-388800 | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:18 UTC | 17 Sep 24 17:19 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=docker                                  |                   |                   |         |                     |                     |
	| start   | -p functional-388800                                        | functional-388800 | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:19 UTC | 17 Sep 24 17:20 UTC |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-388800 cache add                                 | functional-388800 | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:20 UTC | 17 Sep 24 17:20 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-388800 cache add                                 | functional-388800 | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:20 UTC | 17 Sep 24 17:20 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-388800 cache add                                 | functional-388800 | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:20 UTC | 17 Sep 24 17:20 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-388800 cache add                                 | functional-388800 | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:20 UTC | 17 Sep 24 17:20 UTC |
	|         | minikube-local-cache-test:functional-388800                 |                   |                   |         |                     |                     |
	| cache   | functional-388800 cache delete                              | functional-388800 | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:20 UTC | 17 Sep 24 17:20 UTC |
	|         | minikube-local-cache-test:functional-388800                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:20 UTC | 17 Sep 24 17:20 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:20 UTC | 17 Sep 24 17:20 UTC |
	| ssh     | functional-388800 ssh sudo                                  | functional-388800 | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:20 UTC | 17 Sep 24 17:20 UTC |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-388800                                           | functional-388800 | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:20 UTC | 17 Sep 24 17:20 UTC |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-388800 ssh                                       | functional-388800 | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:20 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-388800 cache reload                              | functional-388800 | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:20 UTC | 17 Sep 24 17:20 UTC |
	| ssh     | functional-388800 ssh                                       | functional-388800 | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:20 UTC | 17 Sep 24 17:20 UTC |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:20 UTC | 17 Sep 24 17:20 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:20 UTC | 17 Sep 24 17:20 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-388800 kubectl --                                | functional-388800 | minikube2\jenkins | v1.34.0 | 17 Sep 24 17:20 UTC | 17 Sep 24 17:20 UTC |
	|         | --context functional-388800                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
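The final Audit row is the kubectl passthrough check that ran just before the failing MinikubeKubectlCmdDirectly step. Reconstructed outside the test harness (profile name taken from this log; the exact flag placement is an assumption, since the Audit table only records the argument list), the call is roughly:

    # kubectl passthrough via the minikube binary, as recorded in the last Audit row
    minikube -p functional-388800 kubectl -- --context functional-388800 get pods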
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 17:19:51
	Running on machine: minikube2
	Binary: Built with gc go1.23.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 17:19:51.125544    5292 out.go:345] Setting OutFile to fd 1072 ...
	I0917 17:19:51.203601    5292 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:19:51.203601    5292 out.go:358] Setting ErrFile to fd 1076...
	I0917 17:19:51.203601    5292 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:19:51.229148    5292 out.go:352] Setting JSON to false
	I0917 17:19:51.232500    5292 start.go:129] hostinfo: {"hostname":"minikube2","uptime":8318,"bootTime":1726585272,"procs":180,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0917 17:19:51.232500    5292 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 17:19:51.236463    5292 out.go:177] * [functional-388800] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0917 17:19:51.239394    5292 notify.go:220] Checking for updates...
	I0917 17:19:51.240908    5292 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0917 17:19:51.243239    5292 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 17:19:51.246136    5292 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0917 17:19:51.248546    5292 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 17:19:51.250893    5292 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 17:19:51.254185    5292 config.go:182] Loaded profile config "functional-388800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 17:19:51.254185    5292 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 17:19:51.448815    5292 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.1 (166053)
	I0917 17:19:51.456814    5292 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 17:19:51.789881    5292 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:true NGoroutines:81 SystemTime:2024-09-17 17:19:51.763906656 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657532416 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0917 17:19:51.793888    5292 out.go:177] * Using the docker driver based on existing profile
	I0917 17:19:51.796892    5292 start.go:297] selected driver: docker
	I0917 17:19:51.796892    5292 start.go:901] validating driver "docker" against &{Name:functional-388800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-388800 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:19:51.796892    5292 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 17:19:51.813904    5292 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 17:19:52.137296    5292 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:true NGoroutines:81 SystemTime:2024-09-17 17:19:52.107710547 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657532416 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0917 17:19:52.249883    5292 cni.go:84] Creating CNI manager for ""
	I0917 17:19:52.249883    5292 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 17:19:52.249883    5292 start.go:340] cluster config:
	{Name:functional-388800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-388800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:19:52.256335    5292 out.go:177] * Starting "functional-388800" primary control-plane node in "functional-388800" cluster
	I0917 17:19:52.259538    5292 cache.go:121] Beginning downloading kic base image for docker with docker
	I0917 17:19:52.262603    5292 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0917 17:19:52.265786    5292 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 17:19:52.266315    5292 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0917 17:19:52.266449    5292 preload.go:146] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 17:19:52.266449    5292 cache.go:56] Caching tarball of preloaded images
	I0917 17:19:52.267039    5292 preload.go:172] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0917 17:19:52.267039    5292 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0917 17:19:52.267039    5292 profile.go:143] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-388800\config.json ...
	W0917 17:19:52.379679    5292 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 is of wrong architecture
	I0917 17:19:52.379679    5292 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0917 17:19:52.379679    5292 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1726589491-19662@sha256_6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4.tar
	I0917 17:19:52.380258    5292 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1726589491-19662@sha256_6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4.tar
	I0917 17:19:52.380258    5292 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0917 17:19:52.380330    5292 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0917 17:19:52.380330    5292 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0917 17:19:52.380330    5292 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0917 17:19:52.380330    5292 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0917 17:19:52.380330    5292 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1726589491-19662@sha256_6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4.tar
	I0917 17:19:52.393702    5292 image.go:273] response: 
	I0917 17:19:52.739516    5292 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0917 17:19:52.739516    5292 cache.go:194] Successfully downloaded all kic artifacts
	I0917 17:19:52.739516    5292 start.go:360] acquireMachinesLock for functional-388800: {Name:mk60f463d0f943e65f2a3aeeef860793f6f0e517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 17:19:52.740117    5292 start.go:364] duration metric: took 600.6µs to acquireMachinesLock for "functional-388800"
	I0917 17:19:52.740374    5292 start.go:96] Skipping create...Using existing machine configuration
	I0917 17:19:52.740423    5292 fix.go:54] fixHost starting: 
	I0917 17:19:52.754542    5292 cli_runner.go:164] Run: docker container inspect functional-388800 --format={{.State.Status}}
	I0917 17:19:52.827546    5292 fix.go:112] recreateIfNeeded on functional-388800: state=Running err=<nil>
	W0917 17:19:52.827546    5292 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 17:19:52.830559    5292 out.go:177] * Updating the running docker "functional-388800" container ...
	I0917 17:19:52.833543    5292 machine.go:93] provisionDockerMachine start ...
	I0917 17:19:52.841547    5292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-388800
	I0917 17:19:52.912587    5292 main.go:141] libmachine: Using SSH client type: native
	I0917 17:19:52.912587    5292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf99a00] 0xf9c540 <nil>  [] 0s} 127.0.0.1 54903 <nil> <nil>}
	I0917 17:19:52.912587    5292 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 17:19:53.171690    5292 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-388800
	
	I0917 17:19:53.171690    5292 ubuntu.go:169] provisioning hostname "functional-388800"
	I0917 17:19:53.178935    5292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-388800
	I0917 17:19:53.265707    5292 main.go:141] libmachine: Using SSH client type: native
	I0917 17:19:53.266029    5292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf99a00] 0xf9c540 <nil>  [] 0s} 127.0.0.1 54903 <nil> <nil>}
	I0917 17:19:53.266029    5292 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-388800 && echo "functional-388800" | sudo tee /etc/hostname
	I0917 17:19:53.483062    5292 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-388800
	
	I0917 17:19:53.495174    5292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-388800
	I0917 17:19:53.585409    5292 main.go:141] libmachine: Using SSH client type: native
	I0917 17:19:53.586032    5292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf99a00] 0xf9c540 <nil>  [] 0s} 127.0.0.1 54903 <nil> <nil>}
	I0917 17:19:53.586032    5292 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-388800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-388800/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-388800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 17:19:53.782763    5292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 17:19:53.782763    5292 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube2\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube2\minikube-integration\.minikube}
	I0917 17:19:53.782763    5292 ubuntu.go:177] setting up certificates
	I0917 17:19:53.782763    5292 provision.go:84] configureAuth start
	I0917 17:19:53.792061    5292 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-388800
	I0917 17:19:53.869999    5292 provision.go:143] copyHostCerts
	I0917 17:19:53.870179    5292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem
	I0917 17:19:53.870179    5292 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem, removing ...
	I0917 17:19:53.870179    5292 exec_runner.go:203] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.pem
	I0917 17:19:53.871366    5292 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0917 17:19:53.872869    5292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem
	I0917 17:19:53.872962    5292 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem, removing ...
	I0917 17:19:53.872962    5292 exec_runner.go:203] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cert.pem
	I0917 17:19:53.873518    5292 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0917 17:19:53.874624    5292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem
	I0917 17:19:53.874911    5292 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem, removing ...
	I0917 17:19:53.874911    5292 exec_runner.go:203] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\key.pem
	I0917 17:19:53.875240    5292 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem (1679 bytes)
	I0917 17:19:53.876539    5292 provision.go:117] generating server cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-388800 san=[127.0.0.1 192.168.49.2 functional-388800 localhost minikube]
	I0917 17:19:54.027057    5292 provision.go:177] copyRemoteCerts
	I0917 17:19:54.041557    5292 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 17:19:54.048540    5292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-388800
	I0917 17:19:54.126111    5292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54903 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-388800\id_rsa Username:docker}
	I0917 17:19:54.261003    5292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0917 17:19:54.261003    5292 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 17:19:54.308175    5292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0917 17:19:54.308175    5292 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0917 17:19:54.357133    5292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0917 17:19:54.357133    5292 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 17:19:54.401473    5292 provision.go:87] duration metric: took 618.7039ms to configureAuth
	I0917 17:19:54.401473    5292 ubuntu.go:193] setting minikube options for container-runtime
	I0917 17:19:54.402211    5292 config.go:182] Loaded profile config "functional-388800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 17:19:54.413842    5292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-388800
	I0917 17:19:54.490259    5292 main.go:141] libmachine: Using SSH client type: native
	I0917 17:19:54.491056    5292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf99a00] 0xf9c540 <nil>  [] 0s} 127.0.0.1 54903 <nil> <nil>}
	I0917 17:19:54.491056    5292 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0917 17:19:54.693150    5292 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0917 17:19:54.693150    5292 ubuntu.go:71] root file system type: overlay
	I0917 17:19:54.693805    5292 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0917 17:19:54.702540    5292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-388800
	I0917 17:19:54.787848    5292 main.go:141] libmachine: Using SSH client type: native
	I0917 17:19:54.787848    5292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf99a00] 0xf9c540 <nil>  [] 0s} 127.0.0.1 54903 <nil> <nil>}
	I0917 17:19:54.787848    5292 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0917 17:19:55.010662    5292 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
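The body above is what the tee command wrote to /lib/systemd/system/docker.service.new; the diff/mv/daemon-reload step a few lines below swaps it into place. As an illustration only, not part of the test flow, the rendered unit could be double-checked on the node with:

    # Show the unit systemd will actually load, then lint it for syntax problems
    sudo systemctl cat docker.service
    sudo systemd-analyze verify /lib/systemd/system/docker.service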
	
	I0917 17:19:55.018892    5292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-388800
	I0917 17:19:55.105796    5292 main.go:141] libmachine: Using SSH client type: native
	I0917 17:19:55.106378    5292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf99a00] 0xf9c540 <nil>  [] 0s} 127.0.0.1 54903 <nil> <nil>}
	I0917 17:19:55.106378    5292 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0917 17:19:55.300455    5292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 17:19:55.300455    5292 machine.go:96] duration metric: took 2.4668899s to provisionDockerMachine
	I0917 17:19:55.300455    5292 start.go:293] postStartSetup for "functional-388800" (driver="docker")
	I0917 17:19:55.300455    5292 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 17:19:55.314132    5292 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 17:19:55.320271    5292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-388800
	I0917 17:19:55.395311    5292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54903 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-388800\id_rsa Username:docker}
	I0917 17:19:55.544801    5292 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 17:19:55.556926    5292 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.5 LTS"
	I0917 17:19:55.556926    5292 command_runner.go:130] > NAME="Ubuntu"
	I0917 17:19:55.556926    5292 command_runner.go:130] > VERSION_ID="22.04"
	I0917 17:19:55.556926    5292 command_runner.go:130] > VERSION="22.04.5 LTS (Jammy Jellyfish)"
	I0917 17:19:55.557002    5292 command_runner.go:130] > VERSION_CODENAME=jammy
	I0917 17:19:55.557002    5292 command_runner.go:130] > ID=ubuntu
	I0917 17:19:55.557002    5292 command_runner.go:130] > ID_LIKE=debian
	I0917 17:19:55.557002    5292 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0917 17:19:55.557002    5292 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0917 17:19:55.557002    5292 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0917 17:19:55.557045    5292 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0917 17:19:55.557081    5292 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0917 17:19:55.557171    5292 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 17:19:55.557225    5292 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 17:19:55.557225    5292 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 17:19:55.557225    5292 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 17:19:55.557283    5292 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\addons for local assets ...
	I0917 17:19:55.557325    5292 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\files for local assets ...
	I0917 17:19:55.558324    5292 filesync.go:149] local asset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> 29682.pem in /etc/ssl/certs
	I0917 17:19:55.558452    5292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> /etc/ssl/certs/29682.pem
	I0917 17:19:55.559617    5292 filesync.go:149] local asset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\test\nested\copy\2968\hosts -> hosts in /etc/test/nested/copy/2968
	I0917 17:19:55.559667    5292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\test\nested\copy\2968\hosts -> /etc/test/nested/copy/2968/hosts
	I0917 17:19:55.570907    5292 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2968
	I0917 17:19:55.590858    5292 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /etc/ssl/certs/29682.pem (1708 bytes)
	I0917 17:19:55.638601    5292 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\test\nested\copy\2968\hosts --> /etc/test/nested/copy/2968/hosts (40 bytes)
	I0917 17:19:55.688632    5292 start.go:296] duration metric: took 388.1735ms for postStartSetup
	I0917 17:19:55.703104    5292 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:19:55.710353    5292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-388800
	I0917 17:19:55.787590    5292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54903 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-388800\id_rsa Username:docker}
	I0917 17:19:55.913840    5292 command_runner.go:130] > 1%
	I0917 17:19:55.924880    5292 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 17:19:55.939398    5292 command_runner.go:130] > 951G
	I0917 17:19:55.940708    5292 fix.go:56] duration metric: took 3.2001604s for fixHost
	I0917 17:19:55.940708    5292 start.go:83] releasing machines lock for "functional-388800", held for 3.2004841s
	I0917 17:19:55.948905    5292 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-388800
	I0917 17:19:56.026451    5292 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0917 17:19:56.035795    5292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-388800
	I0917 17:19:56.038773    5292 ssh_runner.go:195] Run: cat /version.json
	I0917 17:19:56.046768    5292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-388800
	I0917 17:19:56.118443    5292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54903 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-388800\id_rsa Username:docker}
	I0917 17:19:56.123568    5292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54903 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-388800\id_rsa Username:docker}
	I0917 17:19:56.252013    5292 command_runner.go:130] > {"iso_version": "v1.34.0-1726481713-19649", "kicbase_version": "v0.0.45-1726589491-19662", "minikube_version": "v1.34.0", "commit": "5a41bb88865072da065bae8afc650aba3c742a66"}
	I0917 17:19:56.260430    5292 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W0917 17:19:56.260982    5292 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0917 17:19:56.265419    5292 ssh_runner.go:195] Run: systemctl --version
	I0917 17:19:56.278874    5292 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I0917 17:19:56.279079    5292 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0917 17:19:56.291865    5292 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 17:19:56.306010    5292 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0917 17:19:56.306010    5292 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0917 17:19:56.306010    5292 command_runner.go:130] > Device: 8ah/138d	Inode: 224         Links: 1
	I0917 17:19:56.306010    5292 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0917 17:19:56.306010    5292 command_runner.go:130] > Access: 2024-09-17 16:57:33.980583227 +0000
	I0917 17:19:56.306010    5292 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0917 17:19:56.306010    5292 command_runner.go:130] > Change: 2024-09-17 16:56:55.374108350 +0000
	I0917 17:19:56.306010    5292 command_runner.go:130] >  Birth: 2024-09-17 16:56:55.374108350 +0000
	I0917 17:19:56.324300    5292 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 17:19:56.345193    5292 command_runner.go:130] ! find: '\\etc\\cni\\net.d': No such file or directory
	W0917 17:19:56.348061    5292 start.go:439] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
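The find fails because the path was assembled with Windows separators ("\etc\cni\net.d") before being sent to the Linux guest. A small Go illustration of the difference between OS-specific and slash-only path joining; this is only an explanation of the symptom visible in the log, not a patch to minikube:

package main

import (
	"fmt"
	"path"          // always joins with '/'
	"path/filepath" // joins with the host OS separator ('\' on Windows)
)

func main() {
	remote := path.Join("/etc", "cni", "net.d")        // "/etc/cni/net.d" on any host
	hostStyle := filepath.Join("/etc", "cni", "net.d") // "\etc\cni\net.d" when run on Windows
	fmt.Println(remote, hostStyle)
	fmt.Println(filepath.ToSlash(hostStyle)) // converts separators back to forward slashes
}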
	I0917 17:19:56.360427    5292 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W0917 17:19:56.374136    5292 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W0917 17:19:56.374136    5292 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0917 17:19:56.383860    5292 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 17:19:56.383860    5292 start.go:495] detecting cgroup driver to use...
	I0917 17:19:56.383860    5292 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0917 17:19:56.383860    5292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 17:19:56.420287    5292 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0917 17:19:56.433284    5292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 17:19:56.470312    5292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 17:19:56.495212    5292 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 17:19:56.507250    5292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 17:19:56.544720    5292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 17:19:56.586022    5292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 17:19:56.620839    5292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 17:19:56.657355    5292 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 17:19:56.690292    5292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 17:19:56.724741    5292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 17:19:56.763892    5292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 17:19:56.799961    5292 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 17:19:56.823241    5292 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0917 17:19:56.836239    5292 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 17:19:56.868881    5292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 17:19:57.098297    5292 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0917 17:20:07.591864    5292 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (10.4933786s)
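The series of sed edits above rewrites /etc/containerd/config.toml before containerd is restarted; the key one forces SystemdCgroup = false so the runtime matches the "cgroupfs" driver detected on the host. A self-contained Go sketch of that single substitution, for illustration only: the sample config text is made up, and the regular expression simply mirrors the sed expression shown in the log.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// hypothetical fragment of /etc/containerd/config.toml
	config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true`

	// same intent as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	patched := re.ReplaceAllString(config, "${1}SystemdCgroup = false")
	fmt.Println(patched)
}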
	I0917 17:20:07.591864    5292 start.go:495] detecting cgroup driver to use...
	I0917 17:20:07.591975    5292 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0917 17:20:07.606009    5292 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0917 17:20:07.635522    5292 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0917 17:20:07.635672    5292 command_runner.go:130] > [Unit]
	I0917 17:20:07.635699    5292 command_runner.go:130] > Description=Docker Application Container Engine
	I0917 17:20:07.635699    5292 command_runner.go:130] > Documentation=https://docs.docker.com
	I0917 17:20:07.635699    5292 command_runner.go:130] > BindsTo=containerd.service
	I0917 17:20:07.635699    5292 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0917 17:20:07.635783    5292 command_runner.go:130] > Wants=network-online.target
	I0917 17:20:07.635783    5292 command_runner.go:130] > Requires=docker.socket
	I0917 17:20:07.635783    5292 command_runner.go:130] > StartLimitBurst=3
	I0917 17:20:07.635783    5292 command_runner.go:130] > StartLimitIntervalSec=60
	I0917 17:20:07.635783    5292 command_runner.go:130] > [Service]
	I0917 17:20:07.635783    5292 command_runner.go:130] > Type=notify
	I0917 17:20:07.635783    5292 command_runner.go:130] > Restart=on-failure
	I0917 17:20:07.635858    5292 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0917 17:20:07.635858    5292 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0917 17:20:07.635895    5292 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0917 17:20:07.635933    5292 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0917 17:20:07.635971    5292 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0917 17:20:07.635971    5292 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0917 17:20:07.636043    5292 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0917 17:20:07.636077    5292 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0917 17:20:07.636122    5292 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0917 17:20:07.636122    5292 command_runner.go:130] > ExecStart=
	I0917 17:20:07.636156    5292 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0917 17:20:07.636192    5292 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0917 17:20:07.636192    5292 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0917 17:20:07.636250    5292 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0917 17:20:07.636250    5292 command_runner.go:130] > LimitNOFILE=infinity
	I0917 17:20:07.636250    5292 command_runner.go:130] > LimitNPROC=infinity
	I0917 17:20:07.636250    5292 command_runner.go:130] > LimitCORE=infinity
	I0917 17:20:07.636250    5292 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0917 17:20:07.636250    5292 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0917 17:20:07.636250    5292 command_runner.go:130] > TasksMax=infinity
	I0917 17:20:07.636250    5292 command_runner.go:130] > TimeoutStartSec=0
	I0917 17:20:07.636250    5292 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0917 17:20:07.636250    5292 command_runner.go:130] > Delegate=yes
	I0917 17:20:07.636250    5292 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0917 17:20:07.636250    5292 command_runner.go:130] > KillMode=process
	I0917 17:20:07.636250    5292 command_runner.go:130] > [Install]
	I0917 17:20:07.636250    5292 command_runner.go:130] > WantedBy=multi-user.target
	I0917 17:20:07.636250    5292 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0917 17:20:07.647689    5292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 17:20:07.676253    5292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 17:20:07.710633    5292 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0917 17:20:07.728382    5292 ssh_runner.go:195] Run: which cri-dockerd
	I0917 17:20:07.741151    5292 command_runner.go:130] > /usr/bin/cri-dockerd
	I0917 17:20:07.754326    5292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0917 17:20:07.776666    5292 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0917 17:20:07.833285    5292 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0917 17:20:08.096311    5292 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0917 17:20:08.293200    5292 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0917 17:20:08.293384    5292 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0917 17:20:08.345800    5292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 17:20:08.522406    5292 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0917 17:20:09.524558    5292 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0021429s)
	I0917 17:20:09.536832    5292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0917 17:20:09.576867    5292 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0917 17:20:09.621682    5292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 17:20:09.661462    5292 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0917 17:20:09.836368    5292 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0917 17:20:10.014385    5292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 17:20:10.173467    5292 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0917 17:20:10.217069    5292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0917 17:20:10.255790    5292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 17:20:10.428920    5292 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0917 17:20:10.591625    5292 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0917 17:20:10.605109    5292 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0917 17:20:10.618970    5292 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0917 17:20:10.619105    5292 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0917 17:20:10.619105    5292 command_runner.go:130] > Device: 93h/147d	Inode: 741         Links: 1
	I0917 17:20:10.619105    5292 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0917 17:20:10.619105    5292 command_runner.go:130] > Access: 2024-09-17 17:20:10.555213482 +0000
	I0917 17:20:10.619105    5292 command_runner.go:130] > Modify: 2024-09-17 17:20:10.445199990 +0000
	I0917 17:20:10.619200    5292 command_runner.go:130] > Change: 2024-09-17 17:20:10.445199990 +0000
	I0917 17:20:10.619200    5292 command_runner.go:130] >  Birth: -
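start.go waits up to 60s for /var/run/cri-dockerd.sock and then confirms it with stat, as shown above. A hedged Go sketch of that kind of wait loop, assumed logic rather than the actual minikube code:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls with stat until the path exists as a socket or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil // socket is present
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("cri-dockerd socket is ready")
}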
	I0917 17:20:10.619304    5292 start.go:563] Will wait 60s for crictl version
	I0917 17:20:10.640910    5292 ssh_runner.go:195] Run: which crictl
	I0917 17:20:10.650918    5292 command_runner.go:130] > /usr/bin/crictl
	I0917 17:20:10.664914    5292 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 17:20:10.781129    5292 command_runner.go:130] > Version:  0.1.0
	I0917 17:20:10.781129    5292 command_runner.go:130] > RuntimeName:  docker
	I0917 17:20:10.781129    5292 command_runner.go:130] > RuntimeVersion:  27.2.1
	I0917 17:20:10.781129    5292 command_runner.go:130] > RuntimeApiVersion:  v1
	I0917 17:20:10.781129    5292 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0917 17:20:10.790204    5292 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 17:20:10.849522    5292 command_runner.go:130] > 27.2.1
	I0917 17:20:10.857964    5292 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0917 17:20:10.914746    5292 command_runner.go:130] > 27.2.1
	I0917 17:20:10.919773    5292 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0917 17:20:10.930137    5292 cli_runner.go:164] Run: docker exec -t functional-388800 dig +short host.docker.internal
	I0917 17:20:11.116343    5292 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0917 17:20:11.128324    5292 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0917 17:20:11.141847    5292 command_runner.go:130] > 192.168.65.254	host.minikube.internal
	I0917 17:20:11.149733    5292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-388800
	I0917 17:20:11.228586    5292 kubeadm.go:883] updating cluster {Name:functional-388800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-388800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 17:20:11.228852    5292 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 17:20:11.237767    5292 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 17:20:11.279719    5292 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.1
	I0917 17:20:11.279719    5292 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.1
	I0917 17:20:11.279719    5292 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 17:20:11.279719    5292 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.1
	I0917 17:20:11.279719    5292 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0917 17:20:11.279719    5292 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0917 17:20:11.279719    5292 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0917 17:20:11.279719    5292 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 17:20:11.286143    5292 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0917 17:20:11.286227    5292 docker.go:615] Images already preloaded, skipping extraction
	I0917 17:20:11.293542    5292 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0917 17:20:11.340932    5292 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.1
	I0917 17:20:11.340932    5292 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.1
	I0917 17:20:11.340932    5292 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.1
	I0917 17:20:11.340932    5292 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.1
	I0917 17:20:11.340932    5292 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0917 17:20:11.340932    5292 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0917 17:20:11.341054    5292 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0917 17:20:11.341054    5292 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 17:20:11.341122    5292 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0917 17:20:11.341168    5292 cache_images.go:84] Images are preloaded, skipping loading
	I0917 17:20:11.341168    5292 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.31.1 docker true true} ...
	I0917 17:20:11.341379    5292 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-388800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:functional-388800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 17:20:11.350034    5292 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0917 17:20:11.435492    5292 command_runner.go:130] > cgroupfs
	I0917 17:20:11.441033    5292 cni.go:84] Creating CNI manager for ""
	I0917 17:20:11.441033    5292 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 17:20:11.441033    5292 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 17:20:11.441033    5292 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-388800 NodeName:functional-388800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 17:20:11.441033    5292 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-388800"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 17:20:11.455359    5292 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 17:20:11.478163    5292 command_runner.go:130] > kubeadm
	I0917 17:20:11.478163    5292 command_runner.go:130] > kubectl
	I0917 17:20:11.478163    5292 command_runner.go:130] > kubelet
	I0917 17:20:11.478163    5292 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 17:20:11.491691    5292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 17:20:11.513612    5292 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0917 17:20:11.550318    5292 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 17:20:11.585623    5292 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
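The kubeadm configuration printed above is generated in memory and copied to the node as /var/tmp/minikube/kubeadm.yaml.new. A hypothetical Go sketch of rendering the InitConfiguration stanza from a template; minikube builds the real file from its own templates, so the struct and field names below are purely illustrative.

package main

import (
	"os"
	"text/template"
)

// illustrative template covering only the InitConfiguration part of the dump above
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	_ = t.Execute(os.Stdout, struct {
		NodeIP        string
		NodeName      string
		APIServerPort int
	}{"192.168.49.2", "functional-388800", 8441})
}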
	I0917 17:20:11.633369    5292 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0917 17:20:11.645115    5292 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I0917 17:20:11.657816    5292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 17:20:11.830514    5292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 17:20:11.857005    5292 certs.go:68] Setting up C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-388800 for IP: 192.168.49.2
	I0917 17:20:11.857005    5292 certs.go:194] generating shared ca certs ...
	I0917 17:20:11.857005    5292 certs.go:226] acquiring lock for ca certs: {Name:mka39b35711ce17aa627001b408a7adb2f266bbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:20:11.857775    5292 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key
	I0917 17:20:11.858281    5292 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key
	I0917 17:20:11.858281    5292 certs.go:256] generating profile certs ...
	I0917 17:20:11.859444    5292 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-388800\client.key
	I0917 17:20:11.859985    5292 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-388800\apiserver.key.83dc706e
	I0917 17:20:11.860361    5292 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-388800\proxy-client.key
	I0917 17:20:11.860361    5292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0917 17:20:11.860361    5292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0917 17:20:11.860361    5292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0917 17:20:11.860891    5292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0917 17:20:11.861100    5292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-388800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0917 17:20:11.861208    5292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-388800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0917 17:20:11.861559    5292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-388800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0917 17:20:11.861559    5292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-388800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0917 17:20:11.862485    5292 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\2968.pem (1338 bytes)
	W0917 17:20:11.862485    5292 certs.go:480] ignoring C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\2968_empty.pem, impossibly tiny 0 bytes
	I0917 17:20:11.862485    5292 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0917 17:20:11.863311    5292 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0917 17:20:11.863800    5292 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0917 17:20:11.863800    5292 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0917 17:20:11.864615    5292 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem (1708 bytes)
	I0917 17:20:11.864832    5292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:20:11.865037    5292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\2968.pem -> /usr/share/ca-certificates/2968.pem
	I0917 17:20:11.865184    5292 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem -> /usr/share/ca-certificates/29682.pem
	I0917 17:20:11.865184    5292 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 17:20:11.914720    5292 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 17:20:11.964654    5292 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 17:20:12.010198    5292 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 17:20:12.057562    5292 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-388800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0917 17:20:12.109391    5292 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-388800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 17:20:12.158918    5292 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-388800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 17:20:12.208846    5292 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-388800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 17:20:12.259253    5292 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 17:20:12.305723    5292 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\2968.pem --> /usr/share/ca-certificates/2968.pem (1338 bytes)
	I0917 17:20:12.353229    5292 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\29682.pem --> /usr/share/ca-certificates/29682.pem (1708 bytes)
	I0917 17:20:12.400817    5292 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 17:20:12.447724    5292 ssh_runner.go:195] Run: openssl version
	I0917 17:20:12.463480    5292 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0917 17:20:12.476037    5292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2968.pem && ln -fs /usr/share/ca-certificates/2968.pem /etc/ssl/certs/2968.pem"
	I0917 17:20:12.512367    5292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2968.pem
	I0917 17:20:12.530601    5292 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 17 17:18 /usr/share/ca-certificates/2968.pem
	I0917 17:20:12.530682    5292 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 17:18 /usr/share/ca-certificates/2968.pem
	I0917 17:20:12.543103    5292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2968.pem
	I0917 17:20:12.559078    5292 command_runner.go:130] > 51391683
	I0917 17:20:12.573514    5292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2968.pem /etc/ssl/certs/51391683.0"
	I0917 17:20:12.607636    5292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/29682.pem && ln -fs /usr/share/ca-certificates/29682.pem /etc/ssl/certs/29682.pem"
	I0917 17:20:12.642197    5292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29682.pem
	I0917 17:20:12.655645    5292 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 17 17:18 /usr/share/ca-certificates/29682.pem
	I0917 17:20:12.655645    5292 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 17:18 /usr/share/ca-certificates/29682.pem
	I0917 17:20:12.667690    5292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29682.pem
	I0917 17:20:12.687644    5292 command_runner.go:130] > 3ec20f2e
	I0917 17:20:12.700484    5292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/29682.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 17:20:12.733802    5292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 17:20:12.768169    5292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:20:12.780191    5292 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 17 16:59 /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:20:12.780772    5292 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 16:59 /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:20:12.796363    5292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:20:12.813162    5292 command_runner.go:130] > b5213941
	I0917 17:20:12.829084    5292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 17:20:12.866072    5292 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 17:20:12.875003    5292 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 17:20:12.875003    5292 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0917 17:20:12.876033    5292 command_runner.go:130] > Device: 830h/2096d	Inode: 17030       Links: 1
	I0917 17:20:12.876033    5292 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0917 17:20:12.876033    5292 command_runner.go:130] > Access: 2024-09-17 17:19:02.009477298 +0000
	I0917 17:20:12.876033    5292 command_runner.go:130] > Modify: 2024-09-17 17:19:02.009477298 +0000
	I0917 17:20:12.876033    5292 command_runner.go:130] > Change: 2024-09-17 17:19:02.009477298 +0000
	I0917 17:20:12.876033    5292 command_runner.go:130] >  Birth: 2024-09-17 17:19:02.009477298 +0000
	I0917 17:20:12.886037    5292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 17:20:12.899983    5292 command_runner.go:130] > Certificate will not expire
	I0917 17:20:12.910992    5292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 17:20:12.931055    5292 command_runner.go:130] > Certificate will not expire
	I0917 17:20:12.944754    5292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 17:20:12.969328    5292 command_runner.go:130] > Certificate will not expire
	I0917 17:20:12.982666    5292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 17:20:12.997217    5292 command_runner.go:130] > Certificate will not expire
	I0917 17:20:13.008124    5292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 17:20:13.023307    5292 command_runner.go:130] > Certificate will not expire
	I0917 17:20:13.034289    5292 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 17:20:13.051219    5292 command_runner.go:130] > Certificate will not expire
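Each check above shells out to "openssl x509 -checkend 86400", which asks whether the certificate expires within the next 24 hours. A stdlib Go sketch of the same test, shown as an equivalent rather than what minikube actually runs:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// willExpireWithin reports whether the first certificate in the PEM file
// expires within duration d (true mirrors a non-zero exit from -checkend).
func willExpireWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := willExpireWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	if expiring {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}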
	I0917 17:20:13.051610    5292 kubeadm.go:392] StartCluster: {Name:functional-388800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-388800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:20:13.058916    5292 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0917 17:20:13.119742    5292 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 17:20:13.142280    5292 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0917 17:20:13.142280    5292 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0917 17:20:13.142280    5292 command_runner.go:130] > /var/lib/minikube/etcd:
	I0917 17:20:13.142280    5292 command_runner.go:130] > member
	I0917 17:20:13.142280    5292 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0917 17:20:13.142280    5292 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0917 17:20:13.153965    5292 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0917 17:20:13.172330    5292 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0917 17:20:13.180376    5292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-388800
	I0917 17:20:13.244346    5292 kubeconfig.go:125] found "functional-388800" server: "https://127.0.0.1:54907"
	I0917 17:20:13.246340    5292 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0917 17:20:13.246340    5292 kapi.go:59] client config for functional-388800: &rest.Config{Host:"https://127.0.0.1:54907", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.key", CAFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2673d40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 17:20:13.248337    5292 cert_rotation.go:140] Starting client certificate rotation controller
	I0917 17:20:13.261347    5292 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0917 17:20:13.283890    5292 kubeadm.go:630] The running cluster does not require reconfiguration: 127.0.0.1
	I0917 17:20:13.283890    5292 kubeadm.go:597] duration metric: took 141.6087ms to restartPrimaryControlPlane
	I0917 17:20:13.283890    5292 kubeadm.go:394] duration metric: took 232.2783ms to StartCluster
	I0917 17:20:13.283890    5292 settings.go:142] acquiring lock: {Name:mk9684611c6005d251a6ecf406b4611c2c1e30f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:20:13.284507    5292 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0917 17:20:13.285310    5292 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\kubeconfig: {Name:mk7e72b8b9c82f9d87d6aed6af6962a1c1fa489d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:20:13.286134    5292 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0917 17:20:13.286134    5292 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0917 17:20:13.287506    5292 addons.go:69] Setting storage-provisioner=true in profile "functional-388800"
	I0917 17:20:13.287506    5292 addons.go:234] Setting addon storage-provisioner=true in "functional-388800"
	W0917 17:20:13.287586    5292 addons.go:243] addon storage-provisioner should already be in state true
	I0917 17:20:13.287586    5292 addons.go:69] Setting default-storageclass=true in profile "functional-388800"
	I0917 17:20:13.287655    5292 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-388800"
	I0917 17:20:13.287823    5292 host.go:66] Checking if "functional-388800" exists ...
	I0917 17:20:13.287950    5292 config.go:182] Loaded profile config "functional-388800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 17:20:13.298611    5292 out.go:177] * Verifying Kubernetes components...
	I0917 17:20:13.307531    5292 cli_runner.go:164] Run: docker container inspect functional-388800 --format={{.State.Status}}
	I0917 17:20:13.311150    5292 cli_runner.go:164] Run: docker container inspect functional-388800 --format={{.State.Status}}
	I0917 17:20:13.315839    5292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 17:20:13.390395    5292 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0917 17:20:13.390395    5292 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 17:20:13.391095    5292 kapi.go:59] client config for functional-388800: &rest.Config{Host:"https://127.0.0.1:54907", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.key", CAFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2673d40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0917 17:20:13.391763    5292 addons.go:234] Setting addon default-storageclass=true in "functional-388800"
	W0917 17:20:13.391763    5292 addons.go:243] addon default-storageclass should already be in state true
	I0917 17:20:13.391763    5292 host.go:66] Checking if "functional-388800" exists ...
	I0917 17:20:13.393113    5292 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 17:20:13.393166    5292 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 17:20:13.402079    5292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-388800
	I0917 17:20:13.415811    5292 cli_runner.go:164] Run: docker container inspect functional-388800 --format={{.State.Status}}
	I0917 17:20:13.476867    5292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54903 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-388800\id_rsa Username:docker}
	I0917 17:20:13.483844    5292 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 17:20:13.483844    5292 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 17:20:13.492894    5292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-388800
	I0917 17:20:13.508849    5292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 17:20:13.540866    5292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-388800
	I0917 17:20:13.558844    5292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54903 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-388800\id_rsa Username:docker}
	I0917 17:20:13.604887    5292 node_ready.go:35] waiting up to 6m0s for node "functional-388800" to be "Ready" ...
	I0917 17:20:13.604887    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/nodes/functional-388800
	I0917 17:20:13.604887    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:13.604887    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:13.604887    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:13.607849    5292 round_trippers.go:574] Response Status:  in 2 milliseconds
	I0917 17:20:13.607849    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:13.633900    5292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 17:20:13.729107    5292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 17:20:13.766622    5292 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0917 17:20:13.771805    5292 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 17:20:13.771995    5292 retry.go:31] will retry after 298.7478ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 17:20:13.966696    5292 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0917 17:20:13.975985    5292 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 17:20:13.976118    5292 retry.go:31] will retry after 309.730871ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 17:20:14.083388    5292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 17:20:14.207570    5292 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0917 17:20:14.209042    5292 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 17:20:14.209042    5292 retry.go:31] will retry after 382.378068ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 17:20:14.299833    5292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0917 17:20:14.465811    5292 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0917 17:20:14.472832    5292 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 17:20:14.472957    5292 retry.go:31] will retry after 315.943288ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 17:20:14.608505    5292 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://127.0.0.1:54907/api/v1/nodes/functional-388800
	I0917 17:20:14.608505    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/nodes/functional-388800
	I0917 17:20:14.608505    5292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 17:20:14.608505    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:14.608505    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:14.608505    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:14.611324    5292 round_trippers.go:574] Response Status:  in 2 milliseconds
	I0917 17:20:14.611324    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:14.803344    5292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0917 17:20:15.174358    5292 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0917 17:20:15.181184    5292 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 17:20:15.181184    5292 retry.go:31] will retry after 827.218157ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 17:20:15.477407    5292 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0917 17:20:15.483184    5292 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 17:20:15.483184    5292 retry.go:31] will retry after 539.45973ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 17:20:15.611492    5292 with_retry.go:234] Got a Retry-After 1s response for attempt 2 to https://127.0.0.1:54907/api/v1/nodes/functional-388800
	I0917 17:20:15.611492    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/nodes/functional-388800
	I0917 17:20:15.611492    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:15.611492    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:15.611492    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:15.615792    5292 round_trippers.go:574] Response Status:  in 4 milliseconds
	I0917 17:20:15.615845    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:16.021747    5292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 17:20:16.035386    5292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0917 17:20:16.616206    5292 with_retry.go:234] Got a Retry-After 1s response for attempt 3 to https://127.0.0.1:54907/api/v1/nodes/functional-388800
	I0917 17:20:16.616206    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/nodes/functional-388800
	I0917 17:20:16.616206    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:16.616206    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:16.616206    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:16.620306    5292 round_trippers.go:574] Response Status:  in 4 milliseconds
	I0917 17:20:16.620374    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:16.672953    5292 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0917 17:20:16.680352    5292 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 17:20:16.680352    5292 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 17:20:16.680352    5292 retry.go:31] will retry after 439.318947ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0917 17:20:16.680352    5292 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 17:20:16.681138    5292 retry.go:31] will retry after 1.235028962s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 17:20:17.133509    5292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0917 17:20:17.620751    5292 with_retry.go:234] Got a Retry-After 1s response for attempt 4 to https://127.0.0.1:54907/api/v1/nodes/functional-388800
	I0917 17:20:17.621280    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/nodes/functional-388800
	I0917 17:20:17.621318    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:17.621384    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:17.621384    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:17.624494    5292 round_trippers.go:574] Response Status:  in 3 milliseconds
	I0917 17:20:17.624494    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:17.766710    5292 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0917 17:20:17.774425    5292 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 17:20:17.774425    5292 retry.go:31] will retry after 958.022227ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 17:20:17.929518    5292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 17:20:18.625698    5292 with_retry.go:234] Got a Retry-After 1s response for attempt 5 to https://127.0.0.1:54907/api/v1/nodes/functional-388800
	I0917 17:20:18.625698    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/nodes/functional-388800
	I0917 17:20:18.625698    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:18.625698    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:18.625698    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:18.745252    5292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0917 17:20:21.870787    5292 round_trippers.go:574] Response Status: 200 OK in 3245 milliseconds
	I0917 17:20:21.870787    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:21.870787    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0917 17:20:21.870787    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:21 GMT
	I0917 17:20:21.870787    5292 round_trippers.go:580]     Audit-Id: c472f2a9-647d-4d62-aa1a-05af862972dc
	I0917 17:20:21.870787    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:21.870787    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:21.870787    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0917 17:20:21.870787    5292 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-388800","uid":"8dada93f-7610-4544-bc0c-405dcea73ea3","resourceVersion":"425","creationTimestamp":"2024-09-17T17:19:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-388800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"825de77780746e57a7948604e1eea9da920a46ce","minikube.k8s.io/name":"functional-388800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T17_19_17_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-17T17:19:12Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0917 17:20:21.871774    5292 node_ready.go:49] node "functional-388800" has status "Ready":"True"
	I0917 17:20:21.872815    5292 node_ready.go:38] duration metric: took 8.2678549s for node "functional-388800" to be "Ready" ...
	I0917 17:20:21.872815    5292 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
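The repeated "Got a Retry-After 1s response" lines above are client-go's with_retry.go waiting out the apiserver: while kube-apiserver is restarting it answers with a Retry-After header instead of a usable status, and the client sleeps for the advertised interval and re-issues the GET until the 200 OK at 17:20:21 finally arrives. A rough standard-library sketch of that behaviour follows; TLS configuration and authentication are omitted, and this is not the client-go implementation itself.

package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// getWithRetryAfter keeps issuing GETs until the server stops answering with a
// Retry-After header, sleeping for the advertised number of seconds in between.
func getWithRetryAfter(client *http.Client, url string, maxAttempts int) (*http.Response, error) {
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			return nil, err
		}
		ra := resp.Header.Get("Retry-After")
		if ra == "" || resp.StatusCode == http.StatusOK {
			return resp, nil
		}
		secs, convErr := strconv.Atoi(ra)
		if convErr != nil || secs <= 0 {
			secs = 1 // Retry-After may also be an HTTP date; default to 1s.
		}
		resp.Body.Close()
		fmt.Printf("attempt %d: got a Retry-After %ds response, backing off\n", attempt, secs)
		time.Sleep(time.Duration(secs) * time.Second)
	}
	return nil, fmt.Errorf("server still not ready after %d attempts", maxAttempts)
}

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := getWithRetryAfter(client, "https://127.0.0.1:54907/api/v1/nodes/functional-388800", 6)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
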
	I0917 17:20:21.872984    5292 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0917 17:20:21.873051    5292 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0917 17:20:21.873165    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/namespaces/kube-system/pods
	I0917 17:20:21.873369    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:21.873419    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:21.873419    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:22.161594    5292 round_trippers.go:574] Response Status: 200 OK in 288 milliseconds
	I0917 17:20:22.161669    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:22.161669    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:22.161669    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:22.161669    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:22 GMT
	I0917 17:20:22.161669    5292 round_trippers.go:580]     Audit-Id: cb090589-fea5-47d4-973e-1ddaa28e09bd
	I0917 17:20:22.161669    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:22.161816    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:22.165852    5292 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"459"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-bsr8x","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"257a7451-7022-4de7-bb4c-485d3c48dac3","resourceVersion":"448","creationTimestamp":"2024-09-17T17:19:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"25cb4a1d-e859-4d18-a9ed-50f43997ac7c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T17:19:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25cb4a1d-e859-4d18-a9ed-50f43997ac7c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51630 chars]
	I0917 17:20:22.173786    5292 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bsr8x" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:22.173786    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bsr8x
	I0917 17:20:22.173786    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:22.173786    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:22.173786    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:22.272638    5292 round_trippers.go:574] Response Status: 200 OK in 98 milliseconds
	I0917 17:20:22.272638    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:22.272638    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:22.272638    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:22.272638    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:22.272638    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:22 GMT
	I0917 17:20:22.272638    5292 round_trippers.go:580]     Audit-Id: 1cfa5912-0e1a-4409-9710-912fd20e1633
	I0917 17:20:22.272638    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:22.272638    5292 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-bsr8x","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"257a7451-7022-4de7-bb4c-485d3c48dac3","resourceVersion":"448","creationTimestamp":"2024-09-17T17:19:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"25cb4a1d-e859-4d18-a9ed-50f43997ac7c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T17:19:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25cb4a1d-e859-4d18-a9ed-50f43997ac7c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6495 chars]
	I0917 17:20:22.274176    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/nodes/functional-388800
	I0917 17:20:22.274176    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:22.274176    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:22.274176    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:22.374680    5292 round_trippers.go:574] Response Status: 200 OK in 100 milliseconds
	I0917 17:20:22.374680    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:22.374680    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:22.374680    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:22.374680    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:22 GMT
	I0917 17:20:22.374680    5292 round_trippers.go:580]     Audit-Id: 42066448-4433-4a86-92ac-2fa022d4e9d8
	I0917 17:20:22.374680    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:22.374680    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:22.374680    5292 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-388800","uid":"8dada93f-7610-4544-bc0c-405dcea73ea3","resourceVersion":"425","creationTimestamp":"2024-09-17T17:19:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-388800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"825de77780746e57a7948604e1eea9da920a46ce","minikube.k8s.io/name":"functional-388800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T17_19_17_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-17T17:19:12Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0917 17:20:22.375667    5292 pod_ready.go:93] pod "coredns-7c65d6cfc9-bsr8x" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:22.375667    5292 pod_ready.go:82] duration metric: took 201.8793ms for pod "coredns-7c65d6cfc9-bsr8x" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:22.375667    5292 pod_ready.go:79] waiting up to 6m0s for pod "etcd-functional-388800" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:22.375667    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/namespaces/kube-system/pods/etcd-functional-388800
	I0917 17:20:22.375667    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:22.375667    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:22.375667    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:22.382745    5292 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0917 17:20:22.383468    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:22.383468    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:22 GMT
	I0917 17:20:22.383468    5292 round_trippers.go:580]     Audit-Id: 5b163ab7-bf2f-4e9f-8007-e8628ef18873
	I0917 17:20:22.383468    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:22.383468    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:22.383518    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:22.383518    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:22.383518    5292 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-388800","namespace":"kube-system","uid":"b35932f8-bbb1-432d-b941-eca0784289c6","resourceVersion":"403","creationTimestamp":"2024-09-17T17:19:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"d848d70a3305f4c0b644e3d19c8db1e7","kubernetes.io/config.mirror":"d848d70a3305f4c0b644e3d19c8db1e7","kubernetes.io/config.seen":"2024-09-17T17:19:16.469353018Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-388800","uid":"8dada93f-7610-4544-bc0c-405dcea73ea3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T17:19:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6459 chars]
	I0917 17:20:22.384258    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/nodes/functional-388800
	I0917 17:20:22.384303    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:22.384303    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:22.384303    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:22.460112    5292 round_trippers.go:574] Response Status: 200 OK in 75 milliseconds
	I0917 17:20:22.460345    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:22.460485    5292 round_trippers.go:580]     Audit-Id: 7b392516-080e-42a7-97f1-426617efff04
	I0917 17:20:22.460485    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:22.460485    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:22.460659    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:22.460659    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:22.460659    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:22 GMT
	I0917 17:20:22.461047    5292 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-388800","uid":"8dada93f-7610-4544-bc0c-405dcea73ea3","resourceVersion":"425","creationTimestamp":"2024-09-17T17:19:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-388800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"825de77780746e57a7948604e1eea9da920a46ce","minikube.k8s.io/name":"functional-388800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T17_19_17_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-17T17:19:12Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0917 17:20:22.464083    5292 pod_ready.go:93] pod "etcd-functional-388800" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:22.464131    5292 pod_ready.go:82] duration metric: took 88.4633ms for pod "etcd-functional-388800" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:22.464131    5292 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-functional-388800" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:22.464277    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-388800
	I0917 17:20:22.464434    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:22.464434    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:22.464434    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:22.560522    5292 round_trippers.go:574] Response Status: 200 OK in 95 milliseconds
	I0917 17:20:22.560642    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:22.560642    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:22.560642    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:22 GMT
	I0917 17:20:22.560709    5292 round_trippers.go:580]     Audit-Id: fcd3039b-1e1b-4aaa-bbca-4210fecd1970
	I0917 17:20:22.560709    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:22.560709    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:22.560709    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:22.560800    5292 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-388800","namespace":"kube-system","uid":"cca55773-f804-4852-82ef-cbefb2803abd","resourceVersion":"316","creationTimestamp":"2024-09-17T17:19:14Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"2498186d0fb55daa98403d4ad1c3d45c","kubernetes.io/config.mirror":"2498186d0fb55daa98403d4ad1c3d45c","kubernetes.io/config.seen":"2024-09-17T17:19:06.999273759Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-388800","uid":"8dada93f-7610-4544-bc0c-405dcea73ea3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T17:19:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8535 chars]
	I0917 17:20:22.562100    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/nodes/functional-388800
	I0917 17:20:22.562334    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:22.562334    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:22.562334    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:22.573966    5292 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0917 17:20:22.573966    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:22.573966    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:22 GMT
	I0917 17:20:22.573966    5292 round_trippers.go:580]     Audit-Id: aa84b6ee-2f5a-4ce4-b406-b6cb3accb0a4
	I0917 17:20:22.573966    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:22.573966    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:22.573966    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:22.573966    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:22.573966    5292 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-388800","uid":"8dada93f-7610-4544-bc0c-405dcea73ea3","resourceVersion":"425","creationTimestamp":"2024-09-17T17:19:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-388800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"825de77780746e57a7948604e1eea9da920a46ce","minikube.k8s.io/name":"functional-388800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T17_19_17_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-17T17:19:12Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0917 17:20:22.574914    5292 pod_ready.go:93] pod "kube-apiserver-functional-388800" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:22.574914    5292 pod_ready.go:82] duration metric: took 110.6732ms for pod "kube-apiserver-functional-388800" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:22.574914    5292 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-functional-388800" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:22.574914    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-388800
	I0917 17:20:22.574914    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:22.574914    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:22.574914    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:22.587043    5292 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0917 17:20:22.587171    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:22.587171    5292 round_trippers.go:580]     Audit-Id: dc344576-f32b-41cc-96e8-8a9f43156a64
	I0917 17:20:22.587171    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:22.587247    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:22.587247    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:22.587247    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:22.587247    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:22 GMT
	I0917 17:20:22.587658    5292 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-388800","namespace":"kube-system","uid":"41e00752-31f2-4aab-8fc5-8e16af69743e","resourceVersion":"323","creationTimestamp":"2024-09-17T17:19:15Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"648f7e55302d976a4ced2ee4a7d51746","kubernetes.io/config.mirror":"648f7e55302d976a4ced2ee4a7d51746","kubernetes.io/config.seen":"2024-09-17T17:19:06.999276559Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-388800","uid":"8dada93f-7610-4544-bc0c-405dcea73ea3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T17:19:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8110 chars]
	I0917 17:20:22.588535    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/nodes/functional-388800
	I0917 17:20:22.588635    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:22.588635    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:22.588635    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:22.660008    5292 round_trippers.go:574] Response Status: 200 OK in 71 milliseconds
	I0917 17:20:22.660131    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:22.660196    5292 round_trippers.go:580]     Audit-Id: bfcde369-1980-4dff-bafc-b1f83c99da1f
	I0917 17:20:22.660196    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:22.660252    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:22.660252    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:22.660252    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:22.660252    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:22 GMT
	I0917 17:20:22.660252    5292 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-388800","uid":"8dada93f-7610-4544-bc0c-405dcea73ea3","resourceVersion":"425","creationTimestamp":"2024-09-17T17:19:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-388800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"825de77780746e57a7948604e1eea9da920a46ce","minikube.k8s.io/name":"functional-388800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T17_19_17_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-17T17:19:12Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0917 17:20:22.661390    5292 pod_ready.go:93] pod "kube-controller-manager-functional-388800" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:22.661496    5292 pod_ready.go:82] duration metric: took 86.4743ms for pod "kube-controller-manager-functional-388800" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:22.661496    5292 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6f5gv" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:22.661667    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/namespaces/kube-system/pods/kube-proxy-6f5gv
	I0917 17:20:22.661667    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:22.661766    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:22.661810    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:22.681595    5292 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0917 17:20:22.681595    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:22.681595    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:22.681751    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:22.681751    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:22 GMT
	I0917 17:20:22.681751    5292 round_trippers.go:580]     Audit-Id: 2716273d-dafd-4475-aabe-d3bca4e61a4f
	I0917 17:20:22.681751    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:22.681751    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:22.682013    5292 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-6f5gv","generateName":"kube-proxy-","namespace":"kube-system","uid":"b9e528f6-ab8e-4843-87ee-3df1231076c1","resourceVersion":"419","creationTimestamp":"2024-09-17T17:19:21Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8b51d8ca-101d-46dd-8d82-2d24d04fa8f3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T17:19:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8b51d8ca-101d-46dd-8d82-2d24d04fa8f3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6191 chars]
	I0917 17:20:22.682719    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/nodes/functional-388800
	I0917 17:20:22.682719    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:22.682719    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:22.682719    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:22.694577    5292 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0917 17:20:22.695426    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:22.695426    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:22.695426    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:22.695426    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:22.695491    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:22.695491    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:22 GMT
	I0917 17:20:22.695491    5292 round_trippers.go:580]     Audit-Id: bbda453b-43cd-49c4-b350-e32816a3022d
	I0917 17:20:22.695595    5292 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-388800","uid":"8dada93f-7610-4544-bc0c-405dcea73ea3","resourceVersion":"425","creationTimestamp":"2024-09-17T17:19:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-388800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"825de77780746e57a7948604e1eea9da920a46ce","minikube.k8s.io/name":"functional-388800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T17_19_17_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-17T17:19:12Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0917 17:20:22.695996    5292 pod_ready.go:93] pod "kube-proxy-6f5gv" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:22.695996    5292 pod_ready.go:82] duration metric: took 34.4993ms for pod "kube-proxy-6f5gv" in "kube-system" namespace to be "Ready" ...
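Each of the pod_ready checks above (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy) is the same test: fetch the Pod object and look for a status condition of type Ready whose status is True, which is what every `has status "Ready":"True"` line is asserting. A self-contained sketch of that check against the kind of response body shown in the log; the struct is trimmed to just the fields the check needs and is not the Kubernetes API types package.

package main

import (
	"encoding/json"
	"fmt"
)

// podStatus models only the fields of the Pod responses above that matter
// for a readiness check: the name and the list of status conditions.
type podStatus struct {
	Metadata struct {
		Name string `json:"name"`
	} `json:"metadata"`
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// isReady reports whether the pod body carries a Ready condition with status True.
func isReady(body []byte) (bool, error) {
	var p podStatus
	if err := json.Unmarshal(body, &p); err != nil {
		return false, err
	}
	for _, c := range p.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	sample := []byte(`{"metadata":{"name":"kube-proxy-6f5gv"},
	  "status":{"conditions":[{"type":"Ready","status":"True"}]}}`)
	ready, err := isReady(sample)
	fmt.Println(ready, err) // true <nil>
}
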
	I0917 17:20:22.695996    5292 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-functional-388800" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:22.695996    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-388800
	I0917 17:20:22.695996    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:22.695996    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:22.695996    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:22.762326    5292 round_trippers.go:574] Response Status: 200 OK in 66 milliseconds
	I0917 17:20:22.762416    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:22.762416    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:22 GMT
	I0917 17:20:22.762416    5292 round_trippers.go:580]     Audit-Id: 75d075c6-2e5e-493e-8c41-fddee8a9b3f6
	I0917 17:20:22.762416    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:22.762416    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:22.762585    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:22.762585    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:22.762815    5292 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-388800","namespace":"kube-system","uid":"7db13cf4-b972-4b28-8977-09eaeb97848a","resourceVersion":"463","creationTimestamp":"2024-09-17T17:19:13Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7c8b231580dc6338ed4584dfa9c7db23","kubernetes.io/config.mirror":"7c8b231580dc6338ed4584dfa9c7db23","kubernetes.io/config.seen":"2024-09-17T17:19:06.999278960Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-388800","uid":"8dada93f-7610-4544-bc0c-405dcea73ea3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T17:19:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5396 chars]
	I0917 17:20:22.763414    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/nodes/functional-388800
	I0917 17:20:22.763543    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:22.763543    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:22.763543    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:22.769384    5292 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 17:20:22.769384    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:22.769384    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:22.769384    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:22 GMT
	I0917 17:20:22.769931    5292 round_trippers.go:580]     Audit-Id: c1ab5cff-fdcc-4353-bf3e-bcc295a32bab
	I0917 17:20:22.769931    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:22.769931    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:22.769931    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:22.770102    5292 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-388800","uid":"8dada93f-7610-4544-bc0c-405dcea73ea3","resourceVersion":"425","creationTimestamp":"2024-09-17T17:19:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-388800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"825de77780746e57a7948604e1eea9da920a46ce","minikube.k8s.io/name":"functional-388800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T17_19_17_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-17T17:19:12Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0917 17:20:23.196389    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-388800
	I0917 17:20:23.196389    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:23.196389    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:23.196389    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:23.216053    5292 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0917 17:20:23.216053    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:23.216053    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:23 GMT
	I0917 17:20:23.216053    5292 round_trippers.go:580]     Audit-Id: 00dd57b0-d58c-4dc7-bab8-04b8bc64feeb
	I0917 17:20:23.216053    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:23.216053    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:23.216053    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:23.216053    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:23.216742    5292 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-388800","namespace":"kube-system","uid":"7db13cf4-b972-4b28-8977-09eaeb97848a","resourceVersion":"469","creationTimestamp":"2024-09-17T17:19:13Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7c8b231580dc6338ed4584dfa9c7db23","kubernetes.io/config.mirror":"7c8b231580dc6338ed4584dfa9c7db23","kubernetes.io/config.seen":"2024-09-17T17:19:06.999278960Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-388800","uid":"8dada93f-7610-4544-bc0c-405dcea73ea3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T17:19:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0917 17:20:23.217785    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/nodes/functional-388800
	I0917 17:20:23.217785    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:23.217878    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:23.217878    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:23.262532    5292 round_trippers.go:574] Response Status: 200 OK in 44 milliseconds
	I0917 17:20:23.262532    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:23.262656    5292 round_trippers.go:580]     Audit-Id: da72ef40-5b3b-4cb8-8a60-099e5faef1e8
	I0917 17:20:23.262656    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:23.262656    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:23.262656    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:23.262656    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:23.262656    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:23 GMT
	I0917 17:20:23.262955    5292 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-388800","uid":"8dada93f-7610-4544-bc0c-405dcea73ea3","resourceVersion":"425","creationTimestamp":"2024-09-17T17:19:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-388800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"825de77780746e57a7948604e1eea9da920a46ce","minikube.k8s.io/name":"functional-388800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T17_19_17_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-17T17:19:12Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0917 17:20:23.696836    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-388800
	I0917 17:20:23.696836    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:23.696836    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:23.696836    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:23.703420    5292 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0917 17:20:23.703485    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:23.703572    5292 round_trippers.go:580]     Audit-Id: 14f90ffb-2a34-4169-afc0-89f1e9747f87
	I0917 17:20:23.703572    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:23.703572    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:23.703572    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:23.703572    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:23.703572    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:23 GMT
	I0917 17:20:23.703984    5292 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-388800","namespace":"kube-system","uid":"7db13cf4-b972-4b28-8977-09eaeb97848a","resourceVersion":"469","creationTimestamp":"2024-09-17T17:19:13Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7c8b231580dc6338ed4584dfa9c7db23","kubernetes.io/config.mirror":"7c8b231580dc6338ed4584dfa9c7db23","kubernetes.io/config.seen":"2024-09-17T17:19:06.999278960Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-388800","uid":"8dada93f-7610-4544-bc0c-405dcea73ea3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T17:19:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0917 17:20:23.704855    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/nodes/functional-388800
	I0917 17:20:23.704931    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:23.704931    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:23.704931    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:23.714306    5292 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0917 17:20:23.714306    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:23.714393    5292 round_trippers.go:580]     Audit-Id: fadbab04-f302-400c-bbe9-4868f7c7ba2d
	I0917 17:20:23.714393    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:23.714393    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:23.714393    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:23.714393    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:23.714468    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:23 GMT
	I0917 17:20:23.714753    5292 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-388800","uid":"8dada93f-7610-4544-bc0c-405dcea73ea3","resourceVersion":"425","creationTimestamp":"2024-09-17T17:19:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-388800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"825de77780746e57a7948604e1eea9da920a46ce","minikube.k8s.io/name":"functional-388800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T17_19_17_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-17T17:19:12Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0917 17:20:24.001134    5292 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0917 17:20:24.001216    5292 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0917 17:20:24.001265    5292 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0917 17:20:24.001265    5292 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0917 17:20:24.001265    5292 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0917 17:20:24.001342    5292 command_runner.go:130] > pod/storage-provisioner configured
	I0917 17:20:24.001396    5292 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.0718242s)
	I0917 17:20:24.001566    5292 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0917 17:20:24.001566    5292 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (5.2562672s)
	I0917 17:20:24.001914    5292 round_trippers.go:463] GET https://127.0.0.1:54907/apis/storage.k8s.io/v1/storageclasses
	I0917 17:20:24.001970    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:24.001970    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:24.002030    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:24.065398    5292 round_trippers.go:574] Response Status: 200 OK in 63 milliseconds
	I0917 17:20:24.065398    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:24.065398    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:24.065398    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:24.065398    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:24.065398    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:24.065398    5292 round_trippers.go:580]     Content-Length: 1273
	I0917 17:20:24.065398    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:24 GMT
	I0917 17:20:24.065398    5292 round_trippers.go:580]     Audit-Id: c72c9820-18bc-4999-9567-e68371162360
	I0917 17:20:24.065398    5292 request.go:1351] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"493"},"items":[{"metadata":{"name":"standard","uid":"a5a6e4ad-9ad5-426a-bc1c-54a3ebb1e2a6","resourceVersion":"374","creationTimestamp":"2024-09-17T17:19:22Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-17T17:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0917 17:20:24.067624    5292 request.go:1351] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"a5a6e4ad-9ad5-426a-bc1c-54a3ebb1e2a6","resourceVersion":"374","creationTimestamp":"2024-09-17T17:19:22Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-17T17:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0917 17:20:24.067762    5292 round_trippers.go:463] PUT https://127.0.0.1:54907/apis/storage.k8s.io/v1/storageclasses/standard
	I0917 17:20:24.067762    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:24.067861    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:24.067861    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:24.067936    5292 round_trippers.go:473]     Content-Type: application/json
	I0917 17:20:24.079663    5292 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0917 17:20:24.079663    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:24.079663    5292 round_trippers.go:580]     Audit-Id: 9ce38aca-a35f-41b1-b17a-2ea1a8607011
	I0917 17:20:24.079663    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:24.079663    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:24.079769    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:24.079769    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:24.079805    5292 round_trippers.go:580]     Content-Length: 1220
	I0917 17:20:24.079848    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:24 GMT
	I0917 17:20:24.080059    5292 request.go:1351] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"a5a6e4ad-9ad5-426a-bc1c-54a3ebb1e2a6","resourceVersion":"374","creationTimestamp":"2024-09-17T17:19:22Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-17T17:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0917 17:20:24.085620    5292 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0917 17:20:24.087884    5292 addons.go:510] duration metric: took 10.8016534s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0917 17:20:24.196892    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-388800
	I0917 17:20:24.197157    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:24.197157    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:24.197157    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:24.208526    5292 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0917 17:20:24.208526    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:24.208526    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:24.208636    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:24 GMT
	I0917 17:20:24.208636    5292 round_trippers.go:580]     Audit-Id: aa2a8f11-e3d7-4659-9ac6-3c1ddfeb4f60
	I0917 17:20:24.208676    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:24.208676    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:24.208676    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:24.209060    5292 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-388800","namespace":"kube-system","uid":"7db13cf4-b972-4b28-8977-09eaeb97848a","resourceVersion":"469","creationTimestamp":"2024-09-17T17:19:13Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7c8b231580dc6338ed4584dfa9c7db23","kubernetes.io/config.mirror":"7c8b231580dc6338ed4584dfa9c7db23","kubernetes.io/config.seen":"2024-09-17T17:19:06.999278960Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-388800","uid":"8dada93f-7610-4544-bc0c-405dcea73ea3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T17:19:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0917 17:20:24.210412    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/nodes/functional-388800
	I0917 17:20:24.210467    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:24.210467    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:24.210467    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:24.223895    5292 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0917 17:20:24.224045    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:24.224045    5292 round_trippers.go:580]     Audit-Id: 8a519861-a917-4209-af88-8a1b6b7b7205
	I0917 17:20:24.224045    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:24.224045    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:24.224045    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:24.224045    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:24.224045    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:24 GMT
	I0917 17:20:24.224491    5292 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-388800","uid":"8dada93f-7610-4544-bc0c-405dcea73ea3","resourceVersion":"425","creationTimestamp":"2024-09-17T17:19:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-388800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"825de77780746e57a7948604e1eea9da920a46ce","minikube.k8s.io/name":"functional-388800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T17_19_17_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-17T17:19:12Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0917 17:20:24.696490    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-388800
	I0917 17:20:24.696570    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:24.696570    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:24.696570    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:24.703023    5292 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0917 17:20:24.703023    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:24.703023    5292 round_trippers.go:580]     Audit-Id: 4d48e741-1138-4964-ad84-9bba497c4e85
	I0917 17:20:24.703023    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:24.703023    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:24.703023    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:24.703023    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:24.703023    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:24 GMT
	I0917 17:20:24.703023    5292 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-388800","namespace":"kube-system","uid":"7db13cf4-b972-4b28-8977-09eaeb97848a","resourceVersion":"469","creationTimestamp":"2024-09-17T17:19:13Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7c8b231580dc6338ed4584dfa9c7db23","kubernetes.io/config.mirror":"7c8b231580dc6338ed4584dfa9c7db23","kubernetes.io/config.seen":"2024-09-17T17:19:06.999278960Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-388800","uid":"8dada93f-7610-4544-bc0c-405dcea73ea3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T17:19:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0917 17:20:24.703708    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/nodes/functional-388800
	I0917 17:20:24.703708    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:24.703708    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:24.703708    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:24.711281    5292 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0917 17:20:24.711352    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:24.711352    5292 round_trippers.go:580]     Audit-Id: 6669e20d-9ad2-4941-a663-448e78c24804
	I0917 17:20:24.711352    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:24.711352    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:24.711352    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:24.711352    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:24.711352    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:24 GMT
	I0917 17:20:24.711352    5292 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-388800","uid":"8dada93f-7610-4544-bc0c-405dcea73ea3","resourceVersion":"425","creationTimestamp":"2024-09-17T17:19:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-388800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"825de77780746e57a7948604e1eea9da920a46ce","minikube.k8s.io/name":"functional-388800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T17_19_17_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-17T17:19:12Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0917 17:20:24.712089    5292 pod_ready.go:103] pod "kube-scheduler-functional-388800" in "kube-system" namespace has status "Ready":"False"
	I0917 17:20:25.196751    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-388800
	I0917 17:20:25.196751    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:25.196751    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:25.196751    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:25.203414    5292 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0917 17:20:25.203462    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:25.203508    5292 round_trippers.go:580]     Audit-Id: 1aac2841-56d7-48b7-923c-e43a09175266
	I0917 17:20:25.203508    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:25.203508    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:25.203508    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:25.203508    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:25.203508    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:25 GMT
	I0917 17:20:25.205834    5292 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-388800","namespace":"kube-system","uid":"7db13cf4-b972-4b28-8977-09eaeb97848a","resourceVersion":"469","creationTimestamp":"2024-09-17T17:19:13Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7c8b231580dc6338ed4584dfa9c7db23","kubernetes.io/config.mirror":"7c8b231580dc6338ed4584dfa9c7db23","kubernetes.io/config.seen":"2024-09-17T17:19:06.999278960Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-388800","uid":"8dada93f-7610-4544-bc0c-405dcea73ea3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T17:19:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0917 17:20:25.207680    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/nodes/functional-388800
	I0917 17:20:25.207680    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:25.207766    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:25.207766    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:25.214205    5292 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0917 17:20:25.214244    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:25.214244    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:25.214244    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:25.214315    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:25.214315    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:25 GMT
	I0917 17:20:25.214381    5292 round_trippers.go:580]     Audit-Id: ff882bfd-2227-4a92-87e6-1545c0acabff
	I0917 17:20:25.214381    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:25.215283    5292 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-388800","uid":"8dada93f-7610-4544-bc0c-405dcea73ea3","resourceVersion":"425","creationTimestamp":"2024-09-17T17:19:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-388800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"825de77780746e57a7948604e1eea9da920a46ce","minikube.k8s.io/name":"functional-388800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T17_19_17_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-17T17:19:12Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0917 17:20:25.696417    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-388800
	I0917 17:20:25.696417    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:25.696417    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:25.696417    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:25.703569    5292 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0917 17:20:25.703569    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:25.703569    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:25.703569    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:25.703569    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:25.703569    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:25 GMT
	I0917 17:20:25.703569    5292 round_trippers.go:580]     Audit-Id: 2e6e3ec6-b885-4862-88e7-62ec685b3113
	I0917 17:20:25.703569    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:25.704103    5292 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-388800","namespace":"kube-system","uid":"7db13cf4-b972-4b28-8977-09eaeb97848a","resourceVersion":"469","creationTimestamp":"2024-09-17T17:19:13Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7c8b231580dc6338ed4584dfa9c7db23","kubernetes.io/config.mirror":"7c8b231580dc6338ed4584dfa9c7db23","kubernetes.io/config.seen":"2024-09-17T17:19:06.999278960Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-388800","uid":"8dada93f-7610-4544-bc0c-405dcea73ea3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T17:19:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0917 17:20:25.704504    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/nodes/functional-388800
	I0917 17:20:25.704504    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:25.704504    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:25.704504    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:25.710354    5292 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 17:20:25.710354    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:25.710354    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:25.710354    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:25.710354    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:25.710354    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:25.710354    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:25 GMT
	I0917 17:20:25.710354    5292 round_trippers.go:580]     Audit-Id: b653f027-67b5-4b67-9d8c-fcf94ecba19a
	I0917 17:20:25.710800    5292 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-388800","uid":"8dada93f-7610-4544-bc0c-405dcea73ea3","resourceVersion":"425","creationTimestamp":"2024-09-17T17:19:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-388800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"825de77780746e57a7948604e1eea9da920a46ce","minikube.k8s.io/name":"functional-388800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T17_19_17_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-17T17:19:12Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0917 17:20:26.196829    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-388800
	I0917 17:20:26.196982    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:26.196982    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:26.196982    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:26.202953    5292 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 17:20:26.203023    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:26.203023    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:26.203023    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:26.203023    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:26 GMT
	I0917 17:20:26.203023    5292 round_trippers.go:580]     Audit-Id: e94d577c-271c-466a-90ec-5ad2125e6123
	I0917 17:20:26.203023    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:26.203023    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:26.203693    5292 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-388800","namespace":"kube-system","uid":"7db13cf4-b972-4b28-8977-09eaeb97848a","resourceVersion":"561","creationTimestamp":"2024-09-17T17:19:13Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7c8b231580dc6338ed4584dfa9c7db23","kubernetes.io/config.mirror":"7c8b231580dc6338ed4584dfa9c7db23","kubernetes.io/config.seen":"2024-09-17T17:19:06.999278960Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-388800","uid":"8dada93f-7610-4544-bc0c-405dcea73ea3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T17:19:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5440 chars]
	I0917 17:20:26.204501    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/nodes/functional-388800
	I0917 17:20:26.204533    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:26.204533    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:26.204533    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:26.211940    5292 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0917 17:20:26.211940    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:26.211940    5292 round_trippers.go:580]     Audit-Id: 403368b7-a34e-4144-bca3-e4e51396d0e6
	I0917 17:20:26.211940    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:26.211940    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:26.211940    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:26.211940    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:26.211940    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:26 GMT
	I0917 17:20:26.212547    5292 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-388800","uid":"8dada93f-7610-4544-bc0c-405dcea73ea3","resourceVersion":"425","creationTimestamp":"2024-09-17T17:19:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-388800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"825de77780746e57a7948604e1eea9da920a46ce","minikube.k8s.io/name":"functional-388800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T17_19_17_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-17T17:19:12Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0917 17:20:26.696211    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-388800
	I0917 17:20:26.696211    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:26.696211    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:26.696211    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:26.702837    5292 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0917 17:20:26.702837    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:26.702837    5292 round_trippers.go:580]     Audit-Id: b4cf714d-4229-4d4f-92ad-4c3d569b2f89
	I0917 17:20:26.702837    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:26.702837    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:26.702837    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:26.702837    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:26.702837    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:26 GMT
	I0917 17:20:26.702837    5292 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-388800","namespace":"kube-system","uid":"7db13cf4-b972-4b28-8977-09eaeb97848a","resourceVersion":"562","creationTimestamp":"2024-09-17T17:19:13Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7c8b231580dc6338ed4584dfa9c7db23","kubernetes.io/config.mirror":"7c8b231580dc6338ed4584dfa9c7db23","kubernetes.io/config.seen":"2024-09-17T17:19:06.999278960Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-388800","uid":"8dada93f-7610-4544-bc0c-405dcea73ea3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-17T17:19:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5197 chars]
	I0917 17:20:26.703483    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/nodes/functional-388800
	I0917 17:20:26.703483    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:26.703483    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:26.703483    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:26.710226    5292 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0917 17:20:26.710226    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:26.710226    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:26.710226    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:26 GMT
	I0917 17:20:26.710226    5292 round_trippers.go:580]     Audit-Id: 534675e5-683e-4a6f-b4b9-3fbacc57188b
	I0917 17:20:26.710226    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:26.710226    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:26.710226    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:26.710226    5292 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-388800","uid":"8dada93f-7610-4544-bc0c-405dcea73ea3","resourceVersion":"425","creationTimestamp":"2024-09-17T17:19:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-388800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"825de77780746e57a7948604e1eea9da920a46ce","minikube.k8s.io/name":"functional-388800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T17_19_17_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-17T17:19:12Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0917 17:20:26.711329    5292 pod_ready.go:93] pod "kube-scheduler-functional-388800" in "kube-system" namespace has status "Ready":"True"
	I0917 17:20:26.711329    5292 pod_ready.go:82] duration metric: took 4.0152973s for pod "kube-scheduler-functional-388800" in "kube-system" namespace to be "Ready" ...
	I0917 17:20:26.711329    5292 pod_ready.go:39] duration metric: took 4.8384704s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 17:20:26.711329    5292 api_server.go:52] waiting for apiserver process to appear ...
	I0917 17:20:26.727567    5292 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:20:26.758275    5292 command_runner.go:130] > 6090
	I0917 17:20:26.758275    5292 api_server.go:72] duration metric: took 13.4720208s to wait for apiserver process to appear ...
	I0917 17:20:26.758275    5292 api_server.go:88] waiting for apiserver healthz status ...
	I0917 17:20:26.758275    5292 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:54907/healthz ...
	I0917 17:20:26.775123    5292 api_server.go:279] https://127.0.0.1:54907/healthz returned 200:
	ok
	I0917 17:20:26.775123    5292 round_trippers.go:463] GET https://127.0.0.1:54907/version
	I0917 17:20:26.775123    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:26.775123    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:26.775123    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:26.778403    5292 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0917 17:20:26.778774    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:26.778774    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:26.778774    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:26.778774    5292 round_trippers.go:580]     Content-Length: 263
	I0917 17:20:26.778774    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:26 GMT
	I0917 17:20:26.778857    5292 round_trippers.go:580]     Audit-Id: f036952d-f7ae-4e3c-8103-3d3747809c96
	I0917 17:20:26.778857    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:26.778857    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:26.778857    5292 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0917 17:20:26.779031    5292 api_server.go:141] control plane version: v1.31.1
	I0917 17:20:26.779130    5292 api_server.go:131] duration metric: took 20.8548ms to wait for apiserver health ...
	I0917 17:20:26.779130    5292 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 17:20:26.779336    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/namespaces/kube-system/pods
	I0917 17:20:26.779368    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:26.779368    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:26.779368    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:26.785066    5292 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 17:20:26.785066    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:26.785066    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:26.785066    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:26.785066    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:26 GMT
	I0917 17:20:26.785066    5292 round_trippers.go:580]     Audit-Id: 59c89c30-65a8-4590-aef6-368451955142
	I0917 17:20:26.785066    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:26.785066    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:26.787648    5292 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"562"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-bsr8x","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"257a7451-7022-4de7-bb4c-485d3c48dac3","resourceVersion":"473","creationTimestamp":"2024-09-17T17:19:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"25cb4a1d-e859-4d18-a9ed-50f43997ac7c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T17:19:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25cb4a1d-e859-4d18-a9ed-50f43997ac7c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53397 chars]
	I0917 17:20:26.793281    5292 system_pods.go:59] 7 kube-system pods found
	I0917 17:20:26.793281    5292 system_pods.go:61] "coredns-7c65d6cfc9-bsr8x" [257a7451-7022-4de7-bb4c-485d3c48dac3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 17:20:26.793281    5292 system_pods.go:61] "etcd-functional-388800" [b35932f8-bbb1-432d-b941-eca0784289c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 17:20:26.793834    5292 system_pods.go:61] "kube-apiserver-functional-388800" [cca55773-f804-4852-82ef-cbefb2803abd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 17:20:26.793834    5292 system_pods.go:61] "kube-controller-manager-functional-388800" [41e00752-31f2-4aab-8fc5-8e16af69743e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 17:20:26.793834    5292 system_pods.go:61] "kube-proxy-6f5gv" [b9e528f6-ab8e-4843-87ee-3df1231076c1] Running
	I0917 17:20:26.793953    5292 system_pods.go:61] "kube-scheduler-functional-388800" [7db13cf4-b972-4b28-8977-09eaeb97848a] Running
	I0917 17:20:26.793953    5292 system_pods.go:61] "storage-provisioner" [5bef4a9d-5cf0-4ce9-834f-e7696b69f361] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0917 17:20:26.794028    5292 system_pods.go:74] duration metric: took 14.8983ms to wait for pod list to return data ...
	I0917 17:20:26.794048    5292 default_sa.go:34] waiting for default service account to be created ...
	I0917 17:20:26.794148    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/namespaces/default/serviceaccounts
	I0917 17:20:26.794239    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:26.794239    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:26.794239    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:26.800564    5292 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0917 17:20:26.800564    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:26.800564    5292 round_trippers.go:580]     Audit-Id: 1aaa79bd-0728-4230-b23c-7a1ca63431d0
	I0917 17:20:26.800564    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:26.800564    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:26.800564    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:26.800564    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:26.800564    5292 round_trippers.go:580]     Content-Length: 261
	I0917 17:20:26.801091    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:26 GMT
	I0917 17:20:26.801091    5292 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"562"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"71a9106c-330a-4df9-9b3f-f1a344869218","resourceVersion":"328","creationTimestamp":"2024-09-17T17:19:20Z"}}]}
	I0917 17:20:26.801457    5292 default_sa.go:45] found service account: "default"
	I0917 17:20:26.801543    5292 default_sa.go:55] duration metric: took 7.4941ms for default service account to be created ...
	I0917 17:20:26.801543    5292 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 17:20:26.801820    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/namespaces/kube-system/pods
	I0917 17:20:26.801820    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:26.801879    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:26.801879    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:26.810931    5292 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0917 17:20:26.810987    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:26.810987    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:26 GMT
	I0917 17:20:26.810987    5292 round_trippers.go:580]     Audit-Id: 4740803b-addf-41f0-8e99-768fb7f4971b
	I0917 17:20:26.810987    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:26.810987    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:26.810987    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:26.810987    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:26.811733    5292 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"562"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-bsr8x","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"257a7451-7022-4de7-bb4c-485d3c48dac3","resourceVersion":"473","creationTimestamp":"2024-09-17T17:19:21Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"25cb4a1d-e859-4d18-a9ed-50f43997ac7c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-17T17:19:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25cb4a1d-e859-4d18-a9ed-50f43997ac7c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53397 chars]
	I0917 17:20:26.814388    5292 system_pods.go:86] 7 kube-system pods found
	I0917 17:20:26.814448    5292 system_pods.go:89] "coredns-7c65d6cfc9-bsr8x" [257a7451-7022-4de7-bb4c-485d3c48dac3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 17:20:26.814448    5292 system_pods.go:89] "etcd-functional-388800" [b35932f8-bbb1-432d-b941-eca0784289c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0917 17:20:26.814535    5292 system_pods.go:89] "kube-apiserver-functional-388800" [cca55773-f804-4852-82ef-cbefb2803abd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0917 17:20:26.814535    5292 system_pods.go:89] "kube-controller-manager-functional-388800" [41e00752-31f2-4aab-8fc5-8e16af69743e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0917 17:20:26.814535    5292 system_pods.go:89] "kube-proxy-6f5gv" [b9e528f6-ab8e-4843-87ee-3df1231076c1] Running
	I0917 17:20:26.814535    5292 system_pods.go:89] "kube-scheduler-functional-388800" [7db13cf4-b972-4b28-8977-09eaeb97848a] Running
	I0917 17:20:26.814535    5292 system_pods.go:89] "storage-provisioner" [5bef4a9d-5cf0-4ce9-834f-e7696b69f361] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0917 17:20:26.814535    5292 system_pods.go:126] duration metric: took 12.9918ms to wait for k8s-apps to be running ...
	I0917 17:20:26.814605    5292 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 17:20:26.827253    5292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:20:26.852039    5292 system_svc.go:56] duration metric: took 37.4338ms WaitForService to wait for kubelet
	I0917 17:20:26.852039    5292 kubeadm.go:582] duration metric: took 13.5657841s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 17:20:26.852039    5292 node_conditions.go:102] verifying NodePressure condition ...
	I0917 17:20:26.853045    5292 round_trippers.go:463] GET https://127.0.0.1:54907/api/v1/nodes
	I0917 17:20:26.853045    5292 round_trippers.go:469] Request Headers:
	I0917 17:20:26.853045    5292 round_trippers.go:473]     Accept: application/json, */*
	I0917 17:20:26.853045    5292 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0917 17:20:26.859038    5292 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0917 17:20:26.859038    5292 round_trippers.go:577] Response Headers:
	I0917 17:20:26.859038    5292 round_trippers.go:580]     Audit-Id: 3c6ad39b-19cb-47d3-8d1c-d6a223840d35
	I0917 17:20:26.859038    5292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0917 17:20:26.859038    5292 round_trippers.go:580]     Content-Type: application/json
	I0917 17:20:26.859038    5292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: d613359a-1d15-4c1f-be27-83aaf7060638
	I0917 17:20:26.859038    5292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8a8b0f05-03f8-4625-bd53-50186a6a3977
	I0917 17:20:26.859038    5292 round_trippers.go:580]     Date: Tue, 17 Sep 2024 17:20:26 GMT
	I0917 17:20:26.859038    5292 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"562"},"items":[{"metadata":{"name":"functional-388800","uid":"8dada93f-7610-4544-bc0c-405dcea73ea3","resourceVersion":"425","creationTimestamp":"2024-09-17T17:19:12Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-388800","kubernetes.io/os":"linux","minikube.k8s.io/commit":"825de77780746e57a7948604e1eea9da920a46ce","minikube.k8s.io/name":"functional-388800","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_17T17_19_17_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4907 chars]
	I0917 17:20:26.859038    5292 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0917 17:20:26.859038    5292 node_conditions.go:123] node cpu capacity is 16
	I0917 17:20:26.859038    5292 node_conditions.go:105] duration metric: took 6.9992ms to run NodePressure ...
	I0917 17:20:26.859038    5292 start.go:241] waiting for startup goroutines ...
	I0917 17:20:26.859038    5292 start.go:246] waiting for cluster config update ...
	I0917 17:20:26.860047    5292 start.go:255] writing updated cluster config ...
	I0917 17:20:26.878217    5292 ssh_runner.go:195] Run: rm -f paused
	I0917 17:20:27.023412    5292 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 17:20:27.027441    5292 out.go:177] * Done! kubectl is now configured to use "functional-388800" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 17 17:20:09 functional-388800 systemd[1]: Started Docker Application Container Engine.
	Sep 17 17:20:09 functional-388800 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	Sep 17 17:20:09 functional-388800 systemd[1]: cri-docker.service: Deactivated successfully.
	Sep 17 17:20:09 functional-388800 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	Sep 17 17:20:10 functional-388800 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Sep 17 17:20:10 functional-388800 cri-dockerd[4989]: time="2024-09-17T17:20:10Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Sep 17 17:20:10 functional-388800 cri-dockerd[4989]: time="2024-09-17T17:20:10Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Sep 17 17:20:10 functional-388800 cri-dockerd[4989]: time="2024-09-17T17:20:10Z" level=info msg="Start docker client with request timeout 0s"
	Sep 17 17:20:10 functional-388800 cri-dockerd[4989]: time="2024-09-17T17:20:10Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Sep 17 17:20:10 functional-388800 cri-dockerd[4989]: time="2024-09-17T17:20:10Z" level=info msg="Loaded network plugin cni"
	Sep 17 17:20:10 functional-388800 cri-dockerd[4989]: time="2024-09-17T17:20:10Z" level=info msg="Docker cri networking managed by network plugin cni"
	Sep 17 17:20:10 functional-388800 cri-dockerd[4989]: time="2024-09-17T17:20:10Z" level=info msg="Setting cgroupDriver cgroupfs"
	Sep 17 17:20:10 functional-388800 cri-dockerd[4989]: time="2024-09-17T17:20:10Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Sep 17 17:20:10 functional-388800 cri-dockerd[4989]: time="2024-09-17T17:20:10Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Sep 17 17:20:10 functional-388800 cri-dockerd[4989]: time="2024-09-17T17:20:10Z" level=info msg="Start cri-dockerd grpc backend"
	Sep 17 17:20:10 functional-388800 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Sep 17 17:20:10 functional-388800 cri-dockerd[4989]: time="2024-09-17T17:20:10Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7c65d6cfc9-bsr8x_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"9f96ebe1dbf1fea785b55c6b6091c320b850454839dfcf33e8c795a68512041d\""
	Sep 17 17:20:14 functional-388800 cri-dockerd[4989]: time="2024-09-17T17:20:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/69af6c4322b4797649705aa6cce8f67c3dbb91fbd7fcfc81a00ab31ad6b5fa8a/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Sep 17 17:20:15 functional-388800 cri-dockerd[4989]: time="2024-09-17T17:20:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1c99ec2d26f93f2aea9884ff0770f8cbcd256caeb3b66a9892626059f90da5dd/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Sep 17 17:20:15 functional-388800 cri-dockerd[4989]: time="2024-09-17T17:20:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2358096e5cf965506e510a62e3da02eeb6ae5f389d52181a892b1be2e35761e7/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Sep 17 17:20:15 functional-388800 cri-dockerd[4989]: time="2024-09-17T17:20:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/43fdee99846c3130b4f9a01c5da4e475c8140738963c6c6e24b3c53f9e1eb0ff/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Sep 17 17:20:15 functional-388800 cri-dockerd[4989]: time="2024-09-17T17:20:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/df2ef161a8ace811fa893b0702961b8e2d3ac3116159182d22057c7cfa1cef44/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Sep 17 17:20:15 functional-388800 cri-dockerd[4989]: time="2024-09-17T17:20:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/591debc833c0f6867b1b704ebcceabce52c6b2be8dde1d1d57e90dc0f7e2deea/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Sep 17 17:20:15 functional-388800 dockerd[4699]: time="2024-09-17T17:20:15.976265427Z" level=info msg="ignoring event" container=c622a98ba5a533b40a91975eef91ef8ad6f7b74d425b5e0f8f701a44674e32a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 17 17:20:16 functional-388800 cri-dockerd[4989]: time="2024-09-17T17:20:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3d1b5f63a5cf36332e171bfef2bddd1a9ee7360ade28a56ba4a598dc0df9e74d/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	7c401d6ee9438       6e38f40d628db       1 second ago         Running             storage-provisioner       3                   69af6c4322b47       storage-provisioner
	ab31b7fa53f82       c69fa2e9cbf5f       31 seconds ago       Running             coredns                   1                   3d1b5f63a5cf3       coredns-7c65d6cfc9-bsr8x
	2a46ae08cbf37       9aa1fad941575       32 seconds ago       Running             kube-scheduler            1                   591debc833c0f       kube-scheduler-functional-388800
	0d380d85e224e       60c005f310ff3       32 seconds ago       Running             kube-proxy                1                   df2ef161a8ace       kube-proxy-6f5gv
	1eff4eff8444b       6bab7719df100       32 seconds ago       Running             kube-apiserver            1                   43fdee99846c3       kube-apiserver-functional-388800
	39e3a0c666dee       2e96e5913fc06       32 seconds ago       Running             etcd                      1                   2358096e5cf96       etcd-functional-388800
	fbd5ca526c979       175ffd71cce3d       32 seconds ago       Running             kube-controller-manager   1                   1c99ec2d26f93       kube-controller-manager-functional-388800
	c622a98ba5a53       6e38f40d628db       33 seconds ago       Exited              storage-provisioner       2                   69af6c4322b47       storage-provisioner
	932fb437ed8b4       c69fa2e9cbf5f       About a minute ago   Exited              coredns                   0                   9f96ebe1dbf1f       coredns-7c65d6cfc9-bsr8x
	83e91adb623d9       9aa1fad941575       About a minute ago   Exited              kube-scheduler            0                   837a83df02847       kube-scheduler-functional-388800
	
	
	==> coredns [932fb437ed8b] <==
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[332325097]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:19:24.872) (total time: 21016ms):
	Trace[332325097]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21015ms (17:19:45.884)
	Trace[332325097]: [21.016953984s] [21.016953984s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[434317016]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:19:24.872) (total time: 21016ms):
	Trace[434317016]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21014ms (17:19:45.884)
	Trace[434317016]: [21.016580137s] [21.016580137s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[437806240]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (17-Sep-2024 17:19:24.872) (total time: 21045ms):
	Trace[437806240]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21045ms (17:19:45.915)
	Trace[437806240]: [21.04549991s] [21.04549991s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:38561 - 3040 "HINFO IN 7386156513793694885.7014124682886851026. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032217807s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ab31b7fa53f8] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:43650 - 23566 "HINFO IN 2639629526801123458.2307588592488381915. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.098522688s
	
	
	==> describe nodes <==
	Name:               functional-388800
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-388800
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=functional-388800
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T17_19_17_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 17:19:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-388800
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:20:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 17:19:27 +0000   Tue, 17 Sep 2024 17:19:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 17:19:27 +0000   Tue, 17 Sep 2024 17:19:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 17:19:27 +0000   Tue, 17 Sep 2024 17:19:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 17:19:27 +0000   Tue, 17 Sep 2024 17:19:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-388800
	Capacity:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868684Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868684Ki
	  pods:               110
	System Info:
	  Machine ID:                 9ff9b93f87a54108b04dddf3f1a363ad
	  System UUID:                9ff9b93f87a54108b04dddf3f1a363ad
	  Boot ID:                    4eef06a3-6868-4ec2-9bef-e08441d95637
	  Kernel Version:             5.15.153.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-bsr8x                     100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     86s
	  kube-system                 etcd-functional-388800                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         91s
	  kube-system                 kube-apiserver-functional-388800             250m (1%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-controller-manager-functional-388800    200m (1%)     0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-proxy-6f5gv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-scheduler-functional-388800             100m (0%)     0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (4%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                             Age                  From             Message
	  ----     ------                             ----                 ----             -------
	  Normal   Starting                           82s                  kube-proxy       
	  Normal   Starting                           24s                  kube-proxy       
	  Normal   NodeHasSufficientMemory            100s (x7 over 100s)  kubelet          Node functional-388800 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure              100s (x7 over 100s)  kubelet          Node functional-388800 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID               100s (x7 over 100s)  kubelet          Node functional-388800 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced            100s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                           91s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                           91s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced            91s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory            91s                  kubelet          Node functional-388800 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure              91s                  kubelet          Node functional-388800 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID               91s                  kubelet          Node functional-388800 status is now: NodeHasSufficientPID
	  Warning  PossibleMemoryBackedVolumesOnDisk  91s                  kubelet          The tmpfs noswap option is not supported. Memory-backed volumes (e.g. secrets, emptyDirs, etc.) might be swapped to disk and should no longer be considered secure.
	  Normal   RegisteredNode                     87s                  node-controller  Node functional-388800 event: Registered Node functional-388800 in Controller
	  Normal   NodeNotReady                       39s                  kubelet          Node functional-388800 status is now: NodeNotReady
	  Normal   RegisteredNode                     22s                  node-controller  Node functional-388800 event: Registered Node functional-388800 in Controller
	
	
	==> dmesg <==
	[  +0.475040] misc dxg: dxgk: dxgglobal_acquire_channel_lock: Failed to acquire global channel lock
	[  +2.535593] WSL (2) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.002693] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.002713] WSL (1) ERROR: ConfigMountFsTab:2589: Processing fstab with mount -a failed.
	[  +0.003812] WSL (1) ERROR: ConfigApplyWindowsLibPath:2537: open /etc/ld.so.conf.d/ld.wsl.conf
	[  +0.000004]  failed 2
	[  +0.007000] WSL (3) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.002357] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.004204] WSL (4) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.003007] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.072558] WSL (1) WARNING: /usr/share/zoneinfo/Etc/UTC not found. Is the tzdata package installed?
	[  +0.117094] misc dxg: dxgk: dxgglobal_acquire_channel_lock: Failed to acquire global channel lock
	[  +0.938742] netlink: 'init': attribute type 4 has an invalid length.
	[Sep17 16:32] tmpfs: Unknown parameter 'noswap'
	[ +15.280320] tmpfs: Unknown parameter 'noswap'
	[Sep17 16:59] tmpfs: Unknown parameter 'noswap'
	[  +9.522523] tmpfs: Unknown parameter 'noswap'
	[Sep17 17:17] tmpfs: Unknown parameter 'noswap'
	[ +10.033679] tmpfs: Unknown parameter 'noswap'
	[ +14.665033] tmpfs: Unknown parameter 'noswap'
	[Sep17 17:19] tmpfs: Unknown parameter 'noswap'
	[  +9.405000] tmpfs: Unknown parameter 'noswap'
	
	
	==> etcd [39e3a0c666de] <==
	{"level":"info","ts":"2024-09-17T17:20:17.674982Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T17:20:17.675743Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T17:20:17.675901Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T17:20:17.682418Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-17T17:20:17.682966Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-17T17:20:17.683027Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-17T17:20:17.683107Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-17T17:20:17.683142Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-17T17:20:19.176964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-17T17:20:19.177137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-17T17:20:19.177192Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-17T17:20:19.177212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-09-17T17:20:19.177220Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-17T17:20:19.177232Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-09-17T17:20:19.177241Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-17T17:20:19.188180Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-388800 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-17T17:20:19.188234Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T17:20:19.188519Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T17:20:19.189706Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-17T17:20:19.189816Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-17T17:20:19.189922Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T17:20:19.190286Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T17:20:19.191601Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-17T17:20:19.192050Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-17T17:20:22.374278Z","caller":"traceutil/trace.go:171","msg":"trace[306095041] transaction","detail":"{read_only:false; response_revision:461; number_of_response:1; }","duration":"104.861302ms","start":"2024-09-17T17:20:22.269397Z","end":"2024-09-17T17:20:22.374259Z","steps":["trace[306095041] 'process raft request'  (duration: 104.362341ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:20:47 up  2:18,  0 users,  load average: 1.36, 1.44, 1.42
	Linux functional-388800 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [1eff4eff8444] <==
	I0917 17:20:21.833983       1 controller.go:142] Starting OpenAPI controller
	I0917 17:20:21.833994       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0917 17:20:21.834381       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0917 17:20:21.834485       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0917 17:20:21.959975       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0917 17:20:21.960117       1 shared_informer.go:320] Caches are synced for configmaps
	I0917 17:20:21.960193       1 aggregator.go:171] initial CRD sync complete...
	I0917 17:20:21.960198       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0917 17:20:21.960204       1 autoregister_controller.go:144] Starting autoregister controller
	I0917 17:20:21.960210       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0917 17:20:21.960215       1 cache.go:39] Caches are synced for autoregister controller
	I0917 17:20:21.960316       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0917 17:20:21.960328       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0917 17:20:21.960780       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0917 17:20:21.961398       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0917 17:20:21.961467       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0917 17:20:22.060255       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0917 17:20:22.060439       1 policy_source.go:224] refreshing policies
	I0917 17:20:22.066325       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0917 17:20:22.068294       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0917 17:20:22.160219       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0917 17:20:22.163806       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0917 17:20:22.868980       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0917 17:20:25.324519       1 controller.go:615] quota admission added evaluator for: endpoints
	I0917 17:20:25.624009       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [fbd5ca526c97] <==
	I0917 17:20:25.322638       1 shared_informer.go:320] Caches are synced for PVC protection
	I0917 17:20:25.322836       1 shared_informer.go:320] Caches are synced for PV protection
	I0917 17:20:25.326071       1 shared_informer.go:320] Caches are synced for deployment
	I0917 17:20:25.327201       1 shared_informer.go:320] Caches are synced for node
	I0917 17:20:25.327327       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0917 17:20:25.327556       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0917 17:20:25.327646       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0917 17:20:25.327655       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0917 17:20:25.327700       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-388800"
	I0917 17:20:25.328032       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0917 17:20:25.369577       1 shared_informer.go:320] Caches are synced for disruption
	I0917 17:20:25.416970       1 shared_informer.go:320] Caches are synced for stateful set
	I0917 17:20:25.441248       1 shared_informer.go:320] Caches are synced for resource quota
	I0917 17:20:25.476964       1 shared_informer.go:320] Caches are synced for crt configmap
	I0917 17:20:25.480133       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0917 17:20:25.503445       1 shared_informer.go:320] Caches are synced for namespace
	I0917 17:20:25.513159       1 shared_informer.go:320] Caches are synced for resource quota
	I0917 17:20:25.536420       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="265.95198ms"
	I0917 17:20:25.536899       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="49.906µs"
	I0917 17:20:25.577129       1 shared_informer.go:320] Caches are synced for service account
	I0917 17:20:25.965388       1 shared_informer.go:320] Caches are synced for garbage collector
	I0917 17:20:26.019863       1 shared_informer.go:320] Caches are synced for garbage collector
	I0917 17:20:26.019957       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0917 17:20:29.337931       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="15.406312ms"
	I0917 17:20:29.338390       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="41.405µs"
	
	
	==> kube-proxy [0d380d85e224] <==
	E0917 17:20:17.860161       1 metrics.go:340] "failed to initialize nfacct client" err="nfacct sub-system not available"
	E0917 17:20:17.959871       1 metrics.go:340] "failed to initialize nfacct client" err="nfacct sub-system not available"
	I0917 17:20:18.063096       1 server_linux.go:66] "Using iptables proxy"
	I0917 17:20:22.069553       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0917 17:20:22.069723       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 17:20:22.482928       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 17:20:22.483099       1 server_linux.go:169] "Using iptables Proxier"
	I0917 17:20:22.561357       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	E0917 17:20:22.587251       1 proxier.go:283] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv4"
	E0917 17:20:22.660041       1 proxier.go:283] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv6"
	I0917 17:20:22.661144       1 server.go:483] "Version info" version="v1.31.1"
	I0917 17:20:22.661289       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:20:22.667088       1 config.go:328] "Starting node config controller"
	I0917 17:20:22.667386       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 17:20:22.667092       1 config.go:199] "Starting service config controller"
	I0917 17:20:22.667417       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 17:20:22.667210       1 config.go:105] "Starting endpoint slice config controller"
	I0917 17:20:22.668307       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 17:20:22.769202       1 shared_informer.go:320] Caches are synced for node config
	I0917 17:20:22.769102       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0917 17:20:22.769416       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [2a46ae08cbf3] <==
	I0917 17:20:19.276210       1 serving.go:386] Generated self-signed cert in-memory
	I0917 17:20:22.165539       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0917 17:20:22.165930       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:20:22.183255       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0917 17:20:22.183481       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I0917 17:20:22.183502       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0917 17:20:22.183530       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0917 17:20:22.186191       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 17:20:22.186219       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0917 17:20:22.186234       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0917 17:20:22.186352       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0917 17:20:22.284643       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0917 17:20:22.287402       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0917 17:20:22.287441       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [83e91adb623d] <==
	E0917 17:19:13.672046       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 17:19:13.698196       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0917 17:19:13.698295       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:19:13.715364       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0917 17:19:13.715465       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 17:19:13.723097       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0917 17:19:13.723206       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:19:13.810077       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0917 17:19:13.810377       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0917 17:19:13.938616       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0917 17:19:13.938769       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:19:14.036542       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0917 17:19:14.036648       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:19:14.040908       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0917 17:19:14.041020       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:19:14.112246       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0917 17:19:14.112376       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:19:14.128509       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0917 17:19:14.128636       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 17:19:14.212563       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0917 17:19:14.212685       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 17:19:14.246565       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0917 17:19:14.246701       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0917 17:19:16.068170       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0917 17:19:57.265525       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 17 17:20:16 functional-388800 kubelet[2595]: E0917 17:20:16.771741    2595 kuberuntime_manager.go:1599] "getPodContainerStatuses for pod failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 8860f9780074c5f21f5d935d1cd1550a72a68c7140285efac056ad61278c7c0e" pod="kube-system/kube-controller-manager-functional-388800"
	Sep 17 17:20:16 functional-388800 kubelet[2595]: E0917 17:20:16.771784    2595 generic.go:453] "PLEG: Write status" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 8860f9780074c5f21f5d935d1cd1550a72a68c7140285efac056ad61278c7c0e" pod="kube-system/kube-controller-manager-functional-388800"
	Sep 17 17:20:16 functional-388800 kubelet[2595]: I0917 17:20:16.969307    2595 scope.go:117] "RemoveContainer" containerID="4890f5b9d8196703ec32f758e17a17fa76b7fc811b4a946505580ab5fe28d6ea"
	Sep 17 17:20:17 functional-388800 kubelet[2595]: E0917 17:20:17.174592    2595 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-388800?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Sep 17 17:20:17 functional-388800 kubelet[2595]: I0917 17:20:17.260934    2595 scope.go:117] "RemoveContainer" containerID="8c489a7761ecf657f893e8be4507c775dfdb5e7ede562f41eb12130a3a789966"
	Sep 17 17:20:17 functional-388800 kubelet[2595]: I0917 17:20:17.561068    2595 scope.go:117] "RemoveContainer" containerID="cb501ab378853df34695cb6f2ebe739abbcb5d1adb846f47fb1fde86dbb8f0c4"
	Sep 17 17:20:17 functional-388800 kubelet[2595]: E0917 17:20:17.869991    2595 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 8c489a7761ecf657f893e8be4507c775dfdb5e7ede562f41eb12130a3a789966" containerID="8c489a7761ecf657f893e8be4507c775dfdb5e7ede562f41eb12130a3a789966"
	Sep 17 17:20:17 functional-388800 kubelet[2595]: E0917 17:20:17.870084    2595 kuberuntime_manager.go:1599] "getPodContainerStatuses for pod failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 8c489a7761ecf657f893e8be4507c775dfdb5e7ede562f41eb12130a3a789966" pod="kube-system/kube-apiserver-functional-388800"
	Sep 17 17:20:17 functional-388800 kubelet[2595]: E0917 17:20:17.870123    2595 generic.go:453] "PLEG: Write status" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 8c489a7761ecf657f893e8be4507c775dfdb5e7ede562f41eb12130a3a789966" pod="kube-system/kube-apiserver-functional-388800"
	Sep 17 17:20:18 functional-388800 kubelet[2595]: I0917 17:20:18.064296    2595 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="591debc833c0f6867b1b704ebcceabce52c6b2be8dde1d1d57e90dc0f7e2deea"
	Sep 17 17:20:18 functional-388800 kubelet[2595]: I0917 17:20:18.164507    2595 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2358096e5cf965506e510a62e3da02eeb6ae5f389d52181a892b1be2e35761e7"
	Sep 17 17:20:18 functional-388800 kubelet[2595]: I0917 17:20:18.165809    2595 status_manager.go:851] "Failed to get status for pod" podUID="257a7451-7022-4de7-bb4c-485d3c48dac3" pod="kube-system/coredns-7c65d6cfc9-bsr8x" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bsr8x\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 17 17:20:18 functional-388800 kubelet[2595]: I0917 17:20:18.166581    2595 status_manager.go:851] "Failed to get status for pod" podUID="5bef4a9d-5cf0-4ce9-834f-e7696b69f361" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 17 17:20:18 functional-388800 kubelet[2595]: I0917 17:20:18.167289    2595 status_manager.go:851] "Failed to get status for pod" podUID="7c8b231580dc6338ed4584dfa9c7db23" pod="kube-system/kube-scheduler-functional-388800" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-388800\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 17 17:20:18 functional-388800 kubelet[2595]: I0917 17:20:18.168028    2595 status_manager.go:851] "Failed to get status for pod" podUID="648f7e55302d976a4ced2ee4a7d51746" pod="kube-system/kube-controller-manager-functional-388800" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-388800\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 17 17:20:18 functional-388800 kubelet[2595]: I0917 17:20:18.169754    2595 status_manager.go:851] "Failed to get status for pod" podUID="2498186d0fb55daa98403d4ad1c3d45c" pod="kube-system/kube-apiserver-functional-388800" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-388800\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 17 17:20:18 functional-388800 kubelet[2595]: I0917 17:20:18.171279    2595 status_manager.go:851] "Failed to get status for pod" podUID="d848d70a3305f4c0b644e3d19c8db1e7" pod="kube-system/etcd-functional-388800" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-388800\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 17 17:20:18 functional-388800 kubelet[2595]: I0917 17:20:18.174097    2595 status_manager.go:851] "Failed to get status for pod" podUID="b9e528f6-ab8e-4843-87ee-3df1231076c1" pod="kube-system/kube-proxy-6f5gv" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-6f5gv\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 17 17:20:19 functional-388800 kubelet[2595]: I0917 17:20:19.187785    2595 scope.go:117] "RemoveContainer" containerID="c622a98ba5a533b40a91975eef91ef8ad6f7b74d425b5e0f8f701a44674e32a8"
	Sep 17 17:20:19 functional-388800 kubelet[2595]: E0917 17:20:19.188172    2595 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5bef4a9d-5cf0-4ce9-834f-e7696b69f361)\"" pod="kube-system/storage-provisioner" podUID="5bef4a9d-5cf0-4ce9-834f-e7696b69f361"
	Sep 17 17:20:20 functional-388800 kubelet[2595]: I0917 17:20:20.502078    2595 scope.go:117] "RemoveContainer" containerID="c622a98ba5a533b40a91975eef91ef8ad6f7b74d425b5e0f8f701a44674e32a8"
	Sep 17 17:20:20 functional-388800 kubelet[2595]: E0917 17:20:20.502398    2595 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5bef4a9d-5cf0-4ce9-834f-e7696b69f361)\"" pod="kube-system/storage-provisioner" podUID="5bef4a9d-5cf0-4ce9-834f-e7696b69f361"
	Sep 17 17:20:32 functional-388800 kubelet[2595]: I0917 17:20:32.474700    2595 scope.go:117] "RemoveContainer" containerID="c622a98ba5a533b40a91975eef91ef8ad6f7b74d425b5e0f8f701a44674e32a8"
	Sep 17 17:20:32 functional-388800 kubelet[2595]: E0917 17:20:32.475434    2595 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5bef4a9d-5cf0-4ce9-834f-e7696b69f361)\"" pod="kube-system/storage-provisioner" podUID="5bef4a9d-5cf0-4ce9-834f-e7696b69f361"
	Sep 17 17:20:46 functional-388800 kubelet[2595]: I0917 17:20:46.486122    2595 scope.go:117] "RemoveContainer" containerID="c622a98ba5a533b40a91975eef91ef8ad6f7b74d425b5e0f8f701a44674e32a8"
	
	
	==> storage-provisioner [7c401d6ee943] <==
	I0917 17:20:47.006838       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 17:20:47.066413       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 17:20:47.066535       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [c622a98ba5a5] <==
	I0917 17:20:15.878832       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0917 17:20:15.885686       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-388800 -n functional-388800
E0917 17:20:49.659903    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:261: (dbg) Run:  kubectl --context functional-388800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (5.32s)


Test pass (313/340)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 11.3
4 TestDownloadOnly/v1.20.0/preload-exists 0.09
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.35
9 TestDownloadOnly/v1.20.0/DeleteAll 1.35
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.88
12 TestDownloadOnly/v1.31.1/json-events 8.28
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.28
18 TestDownloadOnly/v1.31.1/DeleteAll 1.23
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.69
20 TestDownloadOnlyKic 3.35
21 TestBinaryMirror 2.97
22 TestOffline 125.16
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.3
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.3
27 TestAddons/Setup 566.37
29 TestAddons/serial/Volcano 57.79
31 TestAddons/serial/GCPAuth/Namespaces 0.36
35 TestAddons/parallel/InspektorGadget 13.79
36 TestAddons/parallel/MetricsServer 7.69
37 TestAddons/parallel/HelmTiller 14.88
39 TestAddons/parallel/CSI 63.67
40 TestAddons/parallel/Headlamp 31.96
41 TestAddons/parallel/CloudSpanner 6.76
42 TestAddons/parallel/LocalPath 64.5
43 TestAddons/parallel/NvidiaDevicePlugin 7.16
44 TestAddons/parallel/Yakd 13.84
45 TestAddons/StoppedEnableDisable 13.91
46 TestCertOptions 103.05
47 TestCertExpiration 303.19
48 TestDockerFlags 72.1
49 TestForceSystemdFlag 111.69
50 TestForceSystemdEnv 78.2
57 TestErrorSpam/start 4.02
58 TestErrorSpam/status 2.81
59 TestErrorSpam/pause 3.34
60 TestErrorSpam/unpause 3.58
61 TestErrorSpam/stop 19.45
64 TestFunctional/serial/CopySyncFile 0.03
65 TestFunctional/serial/StartWithProxy 93.92
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 36.07
68 TestFunctional/serial/KubeContext 0.14
69 TestFunctional/serial/KubectlGetPods 0.37
72 TestFunctional/serial/CacheCmd/cache/add_remote 6.71
73 TestFunctional/serial/CacheCmd/cache/add_local 3.72
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.28
75 TestFunctional/serial/CacheCmd/cache/list 0.26
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.86
77 TestFunctional/serial/CacheCmd/cache/cache_reload 4.16
78 TestFunctional/serial/CacheCmd/cache/delete 0.53
79 TestFunctional/serial/MinikubeKubectlCmd 0.52
81 TestFunctional/serial/ExtraConfig 75.96
82 TestFunctional/serial/ComponentHealth 0.2
83 TestFunctional/serial/LogsCmd 2.45
84 TestFunctional/serial/LogsFileCmd 2.54
85 TestFunctional/serial/InvalidService 5.95
87 TestFunctional/parallel/ConfigCmd 1.75
89 TestFunctional/parallel/DryRun 2.53
90 TestFunctional/parallel/InternationalLanguage 1.01
91 TestFunctional/parallel/StatusCmd 3.16
96 TestFunctional/parallel/AddonsCmd 0.68
97 TestFunctional/parallel/PersistentVolumeClaim 114.69
99 TestFunctional/parallel/SSHCmd 1.71
100 TestFunctional/parallel/CpCmd 5.22
101 TestFunctional/parallel/MySQL 75.3
102 TestFunctional/parallel/FileSync 0.75
103 TestFunctional/parallel/CertSync 4.47
107 TestFunctional/parallel/NodeLabels 0.22
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.94
111 TestFunctional/parallel/License 3.86
112 TestFunctional/parallel/ProfileCmd/profile_not_create 1.45
113 TestFunctional/parallel/ProfileCmd/profile_list 1.28
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.98
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 15.59
119 TestFunctional/parallel/ProfileCmd/profile_json_output 1.31
120 TestFunctional/parallel/ServiceCmd/DeployApp 25.59
121 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.17
126 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.21
127 TestFunctional/parallel/DockerEnv/powershell 8.58
128 TestFunctional/parallel/ServiceCmd/List 1.16
129 TestFunctional/parallel/Version/short 0.28
130 TestFunctional/parallel/Version/components 1.73
131 TestFunctional/parallel/ServiceCmd/JSONOutput 1.15
132 TestFunctional/parallel/ImageCommands/ImageListShort 0.71
133 TestFunctional/parallel/ImageCommands/ImageListTable 0.65
134 TestFunctional/parallel/ImageCommands/ImageListJson 0.66
135 TestFunctional/parallel/ImageCommands/ImageListYaml 0.69
136 TestFunctional/parallel/ImageCommands/ImageBuild 10.38
137 TestFunctional/parallel/ImageCommands/Setup 2.07
138 TestFunctional/parallel/ServiceCmd/HTTPS 15.01
139 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.05
140 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.19
141 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.04
142 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.06
143 TestFunctional/parallel/ImageCommands/ImageRemove 1.69
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.02
145 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.24
146 TestFunctional/parallel/ServiceCmd/Format 15.01
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.47
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.5
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.49
150 TestFunctional/parallel/ServiceCmd/URL 15.02
151 TestFunctional/delete_echo-server_images 0.2
152 TestFunctional/delete_my-image_image 0.09
153 TestFunctional/delete_minikube_cached_images 0.09
157 TestMultiControlPlane/serial/StartCluster 209.66
158 TestMultiControlPlane/serial/DeployApp 16.08
159 TestMultiControlPlane/serial/PingHostFromPods 3.77
160 TestMultiControlPlane/serial/AddWorkerNode 55.48
161 TestMultiControlPlane/serial/NodeLabels 0.23
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 2.38
163 TestMultiControlPlane/serial/CopyFile 49.94
164 TestMultiControlPlane/serial/StopSecondaryNode 14.2
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 1.73
166 TestMultiControlPlane/serial/RestartSecondaryNode 86.07
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 2.14
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 279.06
169 TestMultiControlPlane/serial/DeleteSecondaryNode 17.35
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.55
171 TestMultiControlPlane/serial/StopCluster 36.6
172 TestMultiControlPlane/serial/RestartCluster 100.22
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 1.65
174 TestMultiControlPlane/serial/AddSecondaryNode 80.43
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 2.29
178 TestImageBuild/serial/Setup 65.1
179 TestImageBuild/serial/NormalBuild 5.73
180 TestImageBuild/serial/BuildWithBuildArg 2.75
181 TestImageBuild/serial/BuildWithDockerIgnore 1.63
182 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.72
186 TestJSONOutput/start/Command 99.69
187 TestJSONOutput/start/Audit 0
189 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Command 1.41
193 TestJSONOutput/pause/Audit 0
195 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Command 1.26
199 TestJSONOutput/unpause/Audit 0
201 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
204 TestJSONOutput/stop/Command 12.59
205 TestJSONOutput/stop/Audit 0
207 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
209 TestErrorJSONOutput 1.02
211 TestKicCustomNetwork/create_custom_network 73.22
212 TestKicCustomNetwork/use_default_bridge_network 71.45
213 TestKicExistingNetwork 72.73
214 TestKicCustomSubnet 70.88
215 TestKicStaticIP 72.61
216 TestMainNoArgs 0.25
217 TestMinikubeProfile 140.32
220 TestMountStart/serial/StartWithMountFirst 19.51
221 TestMountStart/serial/VerifyMountFirst 0.8
222 TestMountStart/serial/StartWithMountSecond 17.97
223 TestMountStart/serial/VerifyMountSecond 0.76
224 TestMountStart/serial/DeleteFirst 2.78
225 TestMountStart/serial/VerifyMountPostDelete 0.76
226 TestMountStart/serial/Stop 2.05
227 TestMountStart/serial/RestartStopped 12.84
228 TestMountStart/serial/VerifyMountPostStop 0.76
231 TestMultiNode/serial/FreshStart2Nodes 150.39
232 TestMultiNode/serial/DeployApp2Nodes 38.51
233 TestMultiNode/serial/PingHostFrom2Pods 2.64
234 TestMultiNode/serial/AddNode 50.41
235 TestMultiNode/serial/MultiNodeLabels 0.19
236 TestMultiNode/serial/ProfileList 0.94
237 TestMultiNode/serial/CopyFile 27.87
238 TestMultiNode/serial/StopNode 5.1
239 TestMultiNode/serial/StartAfterStop 18.94
240 TestMultiNode/serial/RestartKeepsNodes 124.43
241 TestMultiNode/serial/DeleteNode 10.29
242 TestMultiNode/serial/StopMultiNode 24.38
243 TestMultiNode/serial/RestartMultiNode 70.35
244 TestMultiNode/serial/ValidateNameConflict 67.53
248 TestPreload 169.75
249 TestScheduledStopWindows 134.47
253 TestInsufficientStorage 43.08
254 TestRunningBinaryUpgrade 199.05
256 TestKubernetesUpgrade 517.96
257 TestMissingContainerUpgrade 300.97
259 TestStoppedBinaryUpgrade/Setup 0.88
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.38
261 TestNoKubernetes/serial/StartWithK8s 104.96
262 TestStoppedBinaryUpgrade/Upgrade 346.42
263 TestNoKubernetes/serial/StartWithStopK8s 32.38
264 TestNoKubernetes/serial/Start 33.92
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.77
266 TestNoKubernetes/serial/ProfileList 3.19
267 TestNoKubernetes/serial/Stop 6.71
268 TestNoKubernetes/serial/StartNoArgs 26.8
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.81
277 TestStoppedBinaryUpgrade/MinikubeLogs 3.06
290 TestPause/serial/Start 107.93
291 TestPause/serial/SecondStartNoReconfiguration 46.44
292 TestPause/serial/Pause 1.59
293 TestPause/serial/VerifyStatus 0.94
294 TestPause/serial/Unpause 1.38
295 TestPause/serial/PauseAgain 1.83
296 TestPause/serial/DeletePaused 5
297 TestPause/serial/VerifyDeletedResources 4.31
299 TestStartStop/group/old-k8s-version/serial/FirstStart 240.11
301 TestStartStop/group/no-preload/serial/FirstStart 131.39
303 TestStartStop/group/embed-certs/serial/FirstStart 115.14
305 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 83.46
306 TestStartStop/group/no-preload/serial/DeployApp 9.86
307 TestStartStop/group/embed-certs/serial/DeployApp 10.92
308 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.64
309 TestStartStop/group/no-preload/serial/Stop 12.6
310 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.29
311 TestStartStop/group/embed-certs/serial/Stop 12.5
312 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.86
313 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.88
314 TestStartStop/group/no-preload/serial/SecondStart 292.28
315 TestStartStop/group/old-k8s-version/serial/DeployApp 10.32
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.88
317 TestStartStop/group/embed-certs/serial/SecondStart 315.71
318 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.55
319 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.74
320 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.14
321 TestStartStop/group/old-k8s-version/serial/Stop 13.12
322 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.96
323 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 306.35
324 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.99
325 TestStartStop/group/old-k8s-version/serial/SecondStart 344.33
326 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
327 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.45
328 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.73
329 TestStartStop/group/no-preload/serial/Pause 7.48
331 TestStartStop/group/newest-cni/serial/FirstStart 86.41
332 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
333 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.45
334 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.73
335 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
336 TestStartStop/group/embed-certs/serial/Pause 8.45
337 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 7.68
338 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.88
339 TestStartStop/group/default-k8s-diff-port/serial/Pause 9.37
340 TestNetworkPlugins/group/auto/Start 108.41
341 TestNetworkPlugins/group/kindnet/Start 118.3
342 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
343 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.79
344 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.76
345 TestStartStop/group/old-k8s-version/serial/Pause 9.8
346 TestStartStop/group/newest-cni/serial/DeployApp 0
347 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 3.52
348 TestNetworkPlugins/group/calico/Start 177.79
349 TestStartStop/group/newest-cni/serial/Stop 15.48
350 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 1.01
351 TestStartStop/group/newest-cni/serial/SecondStart 35.64
352 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
353 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
354 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.96
355 TestStartStop/group/newest-cni/serial/Pause 10.72
356 TestNetworkPlugins/group/auto/KubeletFlags 0.95
357 TestNetworkPlugins/group/auto/NetCatPod 21.83
358 TestNetworkPlugins/group/custom-flannel/Start 120.32
359 TestNetworkPlugins/group/auto/DNS 0.51
360 TestNetworkPlugins/group/auto/Localhost 0.66
361 TestNetworkPlugins/group/kindnet/ControllerPod 6.02
362 TestNetworkPlugins/group/auto/HairPin 0.45
363 TestNetworkPlugins/group/kindnet/KubeletFlags 0.85
364 TestNetworkPlugins/group/kindnet/NetCatPod 28.79
365 TestNetworkPlugins/group/kindnet/DNS 0.39
366 TestNetworkPlugins/group/kindnet/Localhost 0.41
367 TestNetworkPlugins/group/kindnet/HairPin 0.39
368 TestNetworkPlugins/group/false/Start 115.62
369 TestNetworkPlugins/group/flannel/Start 130.56
370 TestNetworkPlugins/group/calico/ControllerPod 6.02
371 TestNetworkPlugins/group/calico/KubeletFlags 1.1
372 TestNetworkPlugins/group/calico/NetCatPod 24.98
373 TestNetworkPlugins/group/custom-flannel/KubeletFlags 1.01
374 TestNetworkPlugins/group/custom-flannel/NetCatPod 21.79
375 TestNetworkPlugins/group/calico/DNS 0.53
376 TestNetworkPlugins/group/calico/Localhost 0.43
377 TestNetworkPlugins/group/calico/HairPin 0.4
378 TestNetworkPlugins/group/custom-flannel/DNS 0.74
379 TestNetworkPlugins/group/custom-flannel/Localhost 0.48
380 TestNetworkPlugins/group/custom-flannel/HairPin 0.49
381 TestNetworkPlugins/group/false/KubeletFlags 1.27
382 TestNetworkPlugins/group/false/NetCatPod 23.96
383 TestNetworkPlugins/group/false/DNS 0.57
384 TestNetworkPlugins/group/false/Localhost 0.53
385 TestNetworkPlugins/group/false/HairPin 0.93
386 TestNetworkPlugins/group/bridge/Start 140.6
387 TestNetworkPlugins/group/enable-default-cni/Start 134.77
388 TestNetworkPlugins/group/flannel/ControllerPod 6.02
389 TestNetworkPlugins/group/flannel/KubeletFlags 1.06
390 TestNetworkPlugins/group/flannel/NetCatPod 34.99
391 TestNetworkPlugins/group/kubenet/Start 125.56
392 TestNetworkPlugins/group/flannel/DNS 0.49
393 TestNetworkPlugins/group/flannel/Localhost 0.41
394 TestNetworkPlugins/group/flannel/HairPin 0.4
395 TestNetworkPlugins/group/bridge/KubeletFlags 0.94
396 TestNetworkPlugins/group/bridge/NetCatPod 21.75
397 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 1.36
398 TestNetworkPlugins/group/enable-default-cni/NetCatPod 20.11
399 TestNetworkPlugins/group/bridge/DNS 0.37
400 TestNetworkPlugins/group/bridge/Localhost 0.35
401 TestNetworkPlugins/group/bridge/HairPin 0.31
402 TestNetworkPlugins/group/enable-default-cni/DNS 0.34
403 TestNetworkPlugins/group/enable-default-cni/Localhost 0.31
404 TestNetworkPlugins/group/enable-default-cni/HairPin 0.31
405 TestNetworkPlugins/group/kubenet/KubeletFlags 1.12
406 TestNetworkPlugins/group/kubenet/NetCatPod 20.92
407 TestNetworkPlugins/group/kubenet/DNS 0.42
408 TestNetworkPlugins/group/kubenet/Localhost 0.37
409 TestNetworkPlugins/group/kubenet/HairPin 0.34
TestDownloadOnly/v1.20.0/json-events (11.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-073100 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-073100 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker: (11.3028166s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (11.30s)
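For reference, the download-only invocation exercised above can be reproduced outside the test harness. The following is a minimal Go sketch of how a caller might shell out to the same binary with the same flags; it is an illustration, not the code of aaa_download_only_test.go, and the binary path and profile name are simply carried over from the log above.

	// download_only_sketch.go - illustrative only; binary path and profile name
	// are copied from the log above, not hard requirements.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-windows-amd64.exe", "start",
			"-o=json", "--download-only", "-p", "download-only-073100",
			"--force", "--alsologtostderr",
			"--kubernetes-version=v1.20.0",
			"--container-runtime=docker", "--driver=docker")
		out, err := cmd.CombinedOutput() // capture the JSON progress events
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("start --download-only failed:", err)
		}
	}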

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.09s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-073100
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-073100: exit status 85 (349.3536ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-073100 | minikube2\jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |          |
	|         | -p download-only-073100        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=docker                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 16:55:45
	Running on machine: minikube2
	Binary: Built with gc go1.23.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 16:55:45.329864    6988 out.go:345] Setting OutFile to fd 736 ...
	I0917 16:55:45.413625    6988 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 16:55:45.413625    6988 out.go:358] Setting ErrFile to fd 740...
	I0917 16:55:45.413625    6988 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0917 16:55:45.430189    6988 root.go:314] Error reading config file at C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0917 16:55:45.443666    6988 out.go:352] Setting JSON to true
	I0917 16:55:45.447893    6988 start.go:129] hostinfo: {"hostname":"minikube2","uptime":6873,"bootTime":1726585272,"procs":182,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0917 16:55:45.447955    6988 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 16:55:45.468419    6988 out.go:97] [download-only-073100] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0917 16:55:45.469096    6988 notify.go:220] Checking for updates...
	W0917 16:55:45.469096    6988 preload.go:293] Failed to list preload files: open C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0917 16:55:45.474598    6988 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0917 16:55:45.487388    6988 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0917 16:55:45.497299    6988 out.go:169] MINIKUBE_LOCATION=19662
	I0917 16:55:45.509796    6988 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0917 16:55:45.523748    6988 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0917 16:55:45.524802    6988 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 16:55:45.706968    6988 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.1 (166053)
	I0917 16:55:45.715445    6988 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 16:55:46.085880    6988 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:76 SystemTime:2024-09-17 16:55:46.050738489 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657532416 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0917 16:55:46.093442    6988 out.go:97] Using the docker driver based on user configuration
	I0917 16:55:46.094732    6988 start.go:297] selected driver: docker
	I0917 16:55:46.094732    6988 start.go:901] validating driver "docker" against <nil>
	I0917 16:55:46.110166    6988 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 16:55:46.446425    6988 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:76 SystemTime:2024-09-17 16:55:46.420172349 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657532416 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0917 16:55:46.447059    6988 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 16:55:46.517400    6988 start_flags.go:393] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I0917 16:55:46.518446    6988 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 16:55:46.525410    6988 out.go:169] Using Docker Desktop driver with root privileges
	I0917 16:55:46.531542    6988 cni.go:84] Creating CNI manager for ""
	I0917 16:55:46.531542    6988 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0917 16:55:46.532836    6988 start.go:340] cluster config:
	{Name:download-only-073100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-073100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 16:55:46.538273    6988 out.go:97] Starting "download-only-073100" primary control-plane node in "download-only-073100" cluster
	I0917 16:55:46.538273    6988 cache.go:121] Beginning downloading kic base image for docker with docker
	I0917 16:55:46.547205    6988 out.go:97] Pulling base image v0.0.45-1726589491-19662 ...
	I0917 16:55:46.547205    6988 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0917 16:55:46.547205    6988 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0917 16:55:46.588210    6988 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0917 16:55:46.590368    6988 cache.go:56] Caching tarball of preloaded images
	I0917 16:55:46.590368    6988 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0917 16:55:46.596785    6988 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0917 16:55:46.596785    6988 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0917 16:55:46.634098    6988 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0917 16:55:46.634098    6988 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1726589491-19662@sha256_6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4.tar
	I0917 16:55:46.634098    6988 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1726589491-19662@sha256_6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4.tar
	I0917 16:55:46.634098    6988 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0917 16:55:46.636178    6988 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0917 16:55:46.668318    6988 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0917 16:55:51.508757    6988 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0917 16:55:51.509832    6988 preload.go:254] verifying checksum of C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0917 16:55:52.646320    6988 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0917 16:55:52.646320    6988 profile.go:143] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\download-only-073100\config.json ...
	I0917 16:55:52.646320    6988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\download-only-073100\config.json: {Name:mkab53e92206b55007451982540b64d3e0a6e45c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 16:55:52.648181    6988 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0917 16:55:52.648996    6988 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\windows\amd64\v1.20.0/kubectl.exe
	
	
	* The control-plane node download-only-073100 host does not exist
	  To start a cluster, run: "minikube start -p download-only-073100"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.35s)
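The non-zero exit recorded above does not fail the test: as the captured output notes, the download-only profile never creates a host, so "minikube logs" returning exit status 85 is the expected outcome here. A hedged Go sketch of such an expected-exit-status check, using only the standard library (expectExitCode is a hypothetical helper, not the one in aaa_download_only_test.go):

	// exit_status_sketch.go - illustrative only; the expected code 85 comes from
	// the log above, expectExitCode is a hypothetical helper.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// expectExitCode runs the command and reports whether it exited with want.
	func expectExitCode(want int, name string, args ...string) error {
		err := exec.Command(name, args...).Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == want {
			return nil // the "failure" is the expected outcome
		}
		return fmt.Errorf("expected exit status %d, got %v", want, err)
	}

	func main() {
		if err := expectExitCode(85, "out/minikube-windows-amd64.exe",
			"logs", "-p", "download-only-073100"); err != nil {
			fmt.Println(err)
		}
	}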

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (1.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.3456506s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.35s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.88s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-073100
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.88s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (8.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-831300 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-831300 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker: (8.2825187s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (8.28s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-831300
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-831300: exit status 85 (277.7245ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-073100 | minikube2\jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |                     |
	|         | -p download-only-073100        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=docker                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube2\jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| delete  | -p download-only-073100        | download-only-073100 | minikube2\jenkins | v1.34.0 | 17 Sep 24 16:55 UTC | 17 Sep 24 16:55 UTC |
	| start   | -o=json --download-only        | download-only-831300 | minikube2\jenkins | v1.34.0 | 17 Sep 24 16:55 UTC |                     |
	|         | -p download-only-831300        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=docker                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 16:55:59
	Running on machine: minikube2
	Binary: Built with gc go1.23.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 16:55:59.294016    8716 out.go:345] Setting OutFile to fd 800 ...
	I0917 16:55:59.367846    8716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 16:55:59.367846    8716 out.go:358] Setting ErrFile to fd 824...
	I0917 16:55:59.367939    8716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 16:55:59.388714    8716 out.go:352] Setting JSON to true
	I0917 16:55:59.391811    8716 start.go:129] hostinfo: {"hostname":"minikube2","uptime":6887,"bootTime":1726585272,"procs":182,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0917 16:55:59.392002    8716 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 16:55:59.403818    8716 out.go:97] [download-only-831300] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0917 16:55:59.404320    8716 notify.go:220] Checking for updates...
	I0917 16:55:59.407698    8716 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0917 16:55:59.411737    8716 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0917 16:55:59.418934    8716 out.go:169] MINIKUBE_LOCATION=19662
	I0917 16:55:59.423028    8716 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0917 16:55:59.429835    8716 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0917 16:55:59.431050    8716 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 16:55:59.611951    8716 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.1 (166053)
	I0917 16:55:59.619874    8716 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 16:55:59.933822    8716 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:76 SystemTime:2024-09-17 16:55:59.90278468 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Inde
xServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657532416 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 E
xpected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaV
ersion:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://
github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0917 16:55:59.940587    8716 out.go:97] Using the docker driver based on user configuration
	I0917 16:55:59.940587    8716 start.go:297] selected driver: docker
	I0917 16:55:59.940587    8716 start.go:901] validating driver "docker" against <nil>
	I0917 16:55:59.954384    8716 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 16:56:00.310433    8716 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:76 SystemTime:2024-09-17 16:56:00.26260115 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Inde
xServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657532416 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 E
xpected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaV
ersion:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://
github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0917 16:56:00.311249    8716 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 16:56:00.359727    8716 start_flags.go:393] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I0917 16:56:00.361166    8716 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 16:56:00.683168    8716 out.go:169] Using Docker Desktop driver with root privileges
	I0917 16:56:00.689472    8716 cni.go:84] Creating CNI manager for ""
	I0917 16:56:00.689579    8716 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0917 16:56:00.689579    8716 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0917 16:56:00.689788    8716 start.go:340] cluster config:
	{Name:download-only-831300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-831300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 16:56:00.693399    8716 out.go:97] Starting "download-only-831300" primary control-plane node in "download-only-831300" cluster
	I0917 16:56:00.693480    8716 cache.go:121] Beginning downloading kic base image for docker with docker
	I0917 16:56:00.696852    8716 out.go:97] Pulling base image v0.0.45-1726589491-19662 ...
	I0917 16:56:00.696852    8716 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 16:56:00.696852    8716 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0917 16:56:00.744644    8716 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0917 16:56:00.744644    8716 cache.go:56] Caching tarball of preloaded images
	I0917 16:56:00.744644    8716 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0917 16:56:00.751724    8716 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0917 16:56:00.751724    8716 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0917 16:56:00.775250    8716 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0917 16:56:00.775250    8716 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1726589491-19662@sha256_6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4.tar
	I0917 16:56:00.775250    8716 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1726589491-19662@sha256_6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4.tar
	I0917 16:56:00.776257    8716 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0917 16:56:00.776257    8716 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0917 16:56:00.776257    8716 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0917 16:56:00.776257    8716 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0917 16:56:00.826658    8716 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4?checksum=md5:42e9a173dd5f0c45ed1a890dd06aec5a -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-831300 host does not exist
	  To start a cluster, run: "minikube start -p download-only-831300"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.28s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (1.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.2328278s)
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (1.23s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.69s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-831300
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.69s)

                                                
                                    
TestDownloadOnlyKic (3.35s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-908600 --alsologtostderr --driver=docker
aaa_download_only_test.go:232: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-908600 --alsologtostderr --driver=docker: (1.6318826s)
helpers_test.go:175: Cleaning up "download-docker-908600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-908600
--- PASS: TestDownloadOnlyKic (3.35s)

                                                
                                    
TestBinaryMirror (2.97s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-946000 --alsologtostderr --binary-mirror http://127.0.0.1:53675 --driver=docker
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-946000 --alsologtostderr --binary-mirror http://127.0.0.1:53675 --driver=docker: (1.9133503s)
helpers_test.go:175: Cleaning up "binary-mirror-946000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-946000
--- PASS: TestBinaryMirror (2.97s)
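
TestBinaryMirror passes --binary-mirror http://127.0.0.1:53675, so kubeadm, kubelet and kubectl are fetched from a local HTTP endpoint instead of the default release mirror. A minimal sketch of such a mirror as a plain static file server; the ./mirror directory layout and the hard-coded address are assumptions for illustration, not what the test harness itself does:

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve a directory of pre-downloaded Kubernetes release binaries,
	// e.g. ./mirror/v1.31.1/bin/windows/amd64/kubectl.exe and friends.
	fs := http.FileServer(http.Dir("./mirror"))
	log.Println("binary mirror listening on 127.0.0.1:53675")
	log.Fatal(http.ListenAndServe("127.0.0.1:53675", fs))
}

With such a server running, the start command from the log (out/minikube-windows-amd64.exe start --download-only -p binary-mirror-946000 --binary-mirror http://127.0.0.1:53675 --driver=docker) resolves its binary downloads against it.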

                                                
                                    
TestOffline (125.16s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-855600 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-855600 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker: (1m59.865557s)
helpers_test.go:175: Cleaning up "offline-docker-855600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-855600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-855600: (5.2966635s)
--- PASS: TestOffline (125.16s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.3s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-000400
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-000400: exit status 85 (300.221ms)

                                                
                                                
-- stdout --
	* Profile "addons-000400" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-000400"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.30s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.3s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-000400
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-000400: exit status 85 (301.8977ms)

                                                
                                                
-- stdout --
	* Profile "addons-000400" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-000400"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.30s)

                                                
                                    
TestAddons/Setup (566.37s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-000400 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-000400 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (9m26.3678326s)
--- PASS: TestAddons/Setup (566.37s)

                                                
                                    
TestAddons/serial/Volcano (57.79s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 71.5778ms
addons_test.go:897: volcano-scheduler stabilized in 72.1344ms
addons_test.go:905: volcano-admission stabilized in 72.1344ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-flh4h" [0ed34d77-836f-46a1-b4aa-44db5e41617f] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.0077259s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-r8ggm" [c5df8799-65d4-4e4f-ba69-6ed40e774c9c] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.0087888s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-w6dgt" [1f0c0095-0aad-4739-9546-e84a4c7ee18c] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.0091208s
addons_test.go:932: (dbg) Run:  kubectl --context addons-000400 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-000400 create -f testdata\vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-000400 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [0185f784-b26c-4673-be09-841a00881061] Pending
helpers_test.go:344: "test-job-nginx-0" [0185f784-b26c-4673-be09-841a00881061] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [0185f784-b26c-4673-be09-841a00881061] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 28.007222s
addons_test.go:968: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-000400 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-windows-amd64.exe -p addons-000400 addons disable volcano --alsologtostderr -v=1: (11.5800007s)
--- PASS: TestAddons/serial/Volcano (57.79s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.36s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-000400 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-000400 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.36s)

                                                
                                    
TestAddons/parallel/InspektorGadget (13.79s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-nrm9s" [67b9512f-ecce-4f4f-94fe-c774ad98e86a] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0322595s
addons_test.go:851: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-000400
addons_test.go:851: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-000400: (8.7511688s)
--- PASS: TestAddons/parallel/InspektorGadget (13.79s)

                                                
                                    
TestAddons/parallel/MetricsServer (7.69s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 6.9949ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-qklw7" [1491f166-0f42-45d4-af2d-89a300617312] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.007717s
addons_test.go:417: (dbg) Run:  kubectl --context addons-000400 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-000400 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:434: (dbg) Done: out/minikube-windows-amd64.exe -p addons-000400 addons disable metrics-server --alsologtostderr -v=1: (1.4951358s)
--- PASS: TestAddons/parallel/MetricsServer (7.69s)

                                                
                                    
TestAddons/parallel/HelmTiller (14.88s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 6.579ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-m6jxn" [3b64df46-7619-4d49-9a8c-8d2e767f7fad] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0118276s
addons_test.go:475: (dbg) Run:  kubectl --context addons-000400 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-000400 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.1401397s)
addons_test.go:492: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-000400 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:492: (dbg) Done: out/minikube-windows-amd64.exe -p addons-000400 addons disable helm-tiller --alsologtostderr -v=1: (1.6992276s)
--- PASS: TestAddons/parallel/HelmTiller (14.88s)

                                                
                                    
TestAddons/parallel/CSI (63.67s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 13.238ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-000400 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-000400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-000400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-000400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-000400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-000400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-000400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-000400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-000400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-000400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-000400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-000400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-000400 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-000400 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [dac06301-5187-4d8d-8bfd-57663847a7c6] Pending
helpers_test.go:344: "task-pv-pod" [dac06301-5187-4d8d-8bfd-57663847a7c6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [dac06301-5187-4d8d-8bfd-57663847a7c6] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 19.0065583s
addons_test.go:590: (dbg) Run:  kubectl --context addons-000400 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-000400 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-000400 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-000400 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-000400 delete pod task-pv-pod: (2.0785683s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-000400 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-000400 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-000400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-000400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-000400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-000400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-000400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-000400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-000400 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [12bb76a5-2816-44e4-b54c-35e9c88d6039] Pending
helpers_test.go:344: "task-pv-pod-restore" [12bb76a5-2816-44e4-b54c-35e9c88d6039] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [12bb76a5-2816-44e4-b54c-35e9c88d6039] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.0108004s
addons_test.go:632: (dbg) Run:  kubectl --context addons-000400 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-000400 delete pod task-pv-pod-restore: (1.4674347s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-000400 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-000400 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-000400 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-windows-amd64.exe -p addons-000400 addons disable csi-hostpath-driver --alsologtostderr -v=1: (8.3422643s)
addons_test.go:648: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-000400 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-windows-amd64.exe -p addons-000400 addons disable volumesnapshots --alsologtostderr -v=1: (2.2996062s)
--- PASS: TestAddons/parallel/CSI (63.67s)
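
The repeated helpers_test.go:394 lines above are a wait loop: the helper keeps asking kubectl for the claim's {.status.phase} until it reports Bound or the 6m0s budget runs out. A small stand-alone sketch of that polling pattern via os/exec; the context, namespace, claim name and timeout mirror the log, but the function itself is illustrative and not the actual test helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound polls `kubectl get pvc` until the claim reports phase Bound.
func waitForPVCBound(kubeContext, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-n", ns,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %v", ns, name, timeout)
}

func main() {
	// Values from the TestAddons/parallel/CSI log above.
	if err := waitForPVCBound("addons-000400", "default", "hpvc", 6*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("pvc hpvc is Bound")
}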

                                                
                                    
TestAddons/parallel/Headlamp (31.96s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-000400 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-000400 --alsologtostderr -v=1: (2.2358837s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-zv58p" [6979ce46-80bd-4614-9137-1f6233d63b04] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-zv58p" [6979ce46-80bd-4614-9137-1f6233d63b04] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-zv58p" [6979ce46-80bd-4614-9137-1f6233d63b04] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 23.0139246s
addons_test.go:839: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-000400 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-windows-amd64.exe -p addons-000400 addons disable headlamp --alsologtostderr -v=1: (6.7056087s)
--- PASS: TestAddons/parallel/Headlamp (31.96s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.76s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-pxswn" [42b2d04b-916b-46bc-8fc3-37c670ec9df8] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0120745s
addons_test.go:870: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-000400
addons_test.go:870: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-000400: (1.7268478s)
--- PASS: TestAddons/parallel/CloudSpanner (6.76s)

                                                
                                    
TestAddons/parallel/LocalPath (64.5s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-000400 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-000400 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-000400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-000400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-000400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-000400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-000400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-000400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-000400 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [c294f1a0-699b-42c9-b5ab-e0953ac94432] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [c294f1a0-699b-42c9-b5ab-e0953ac94432] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [c294f1a0-699b-42c9-b5ab-e0953ac94432] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 9.0076218s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-000400 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-000400 ssh "cat /opt/local-path-provisioner/pvc-cc8235e9-11e6-4250-b675-625b2079ef9c_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-000400 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-000400 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-000400 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-windows-amd64.exe -p addons-000400 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (46.9373751s)
--- PASS: TestAddons/parallel/LocalPath (64.50s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (7.16s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-2ftgf" [4de3ed84-6797-4a61-9b52-ef4d7b038511] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0101156s
addons_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-000400
addons_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-000400: (1.1515097s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (7.16s)

                                                
                                    
TestAddons/parallel/Yakd (13.84s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-gsgkz" [48204825-30d3-43e5-afd9-4a91c6cec06d] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.0082492s
addons_test.go:1076: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-000400 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-windows-amd64.exe -p addons-000400 addons disable yakd --alsologtostderr -v=1: (7.5534931s)
--- PASS: TestAddons/parallel/Yakd (13.84s)

                                                
                                    
TestAddons/StoppedEnableDisable (13.91s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-000400
addons_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-000400: (12.7439774s)
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-000400
addons_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-000400
addons_test.go:187: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-000400
--- PASS: TestAddons/StoppedEnableDisable (13.91s)

                                                
                                    
TestCertOptions (103.05s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-254700 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-254700 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (1m35.8514836s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-254700 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-254700 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (1.0981147s)
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-254700 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-254700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-254700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-254700: (5.0128514s)
--- PASS: TestCertOptions (103.05s)
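
cert_options_test.go:60 dumps apiserver.crt with openssl inside the node and checks that the extra --apiserver-ips and --apiserver-names values appear as subject alternative names, with admin.conf pointing at the custom port 8555. The same SAN check can be sketched with crypto/x509, assuming the certificate has first been copied out of the node to a local apiserver.crt file (the test itself greps the openssl text output instead):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("apiserver.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// The SANs requested via --apiserver-ips / --apiserver-names in the log.
	wantIP := net.ParseIP("192.168.15.15")
	wantDNS := "www.google.com"

	ipOK := false
	for _, ip := range cert.IPAddresses {
		if ip.Equal(wantIP) {
			ipOK = true
		}
	}
	dnsOK := false
	for _, name := range cert.DNSNames {
		if name == wantDNS {
			dnsOK = true
		}
	}
	fmt.Printf("IP SAN %s present: %v, DNS SAN %s present: %v\n", wantIP, ipOK, wantDNS, dnsOK)
}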

                                                
                                    
TestCertExpiration (303.19s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-118300 --memory=2048 --cert-expiration=3m --driver=docker
E0917 18:20:44.550736    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-118300 --memory=2048 --cert-expiration=3m --driver=docker: (1m11.2903202s)
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-118300 --memory=2048 --cert-expiration=8760h --driver=docker
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-118300 --memory=2048 --cert-expiration=8760h --driver=docker: (45.5100851s)
helpers_test.go:175: Cleaning up "cert-expiration-118300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-118300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-118300: (6.3890737s)
--- PASS: TestCertExpiration (303.19s)

                                                
                                    
TestDockerFlags (72.1s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-917000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-917000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (1m5.4421703s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-917000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-917000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
E0917 18:22:01.935963    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:175: Cleaning up "docker-flags-917000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-917000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-917000: (4.8822975s)
--- PASS: TestDockerFlags (72.10s)
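
docker_test.go:56 and :67 confirm that the --docker-env and --docker-opt values reached the Docker systemd unit by reading its Environment and ExecStart properties over minikube ssh. A short sketch of that assertion; the profile name and the expected FOO=BAR / BAZ=BAT values are taken from the log, while the structure of the check is illustrative rather than the test's own helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Read the Docker unit's Environment property inside the node, as the test does.
	out, err := exec.Command("out/minikube-windows-amd64.exe", "-p", "docker-flags-917000",
		"ssh", "sudo systemctl show docker --property=Environment --no-pager").CombinedOutput()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
		fmt.Printf("%s present: %v\n", want, strings.Contains(string(out), want))
	}
}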

                                                
                                    
TestForceSystemdFlag (111.69s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-447200 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-447200 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker: (1m43.852706s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-447200 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-447200 ssh "docker info --format {{.CgroupDriver}}": (1.7599147s)
helpers_test.go:175: Cleaning up "force-systemd-flag-447200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-447200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-447200: (6.0809611s)
--- PASS: TestForceSystemdFlag (111.69s)

                                                
                                    
TestForceSystemdEnv (78.2s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-104300 --memory=2048 --alsologtostderr -v=5 --driver=docker
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-104300 --memory=2048 --alsologtostderr -v=5 --driver=docker: (1m12.2109086s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-104300 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-104300 ssh "docker info --format {{.CgroupDriver}}": (1.1456973s)
helpers_test.go:175: Cleaning up "force-systemd-env-104300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-104300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-104300: (4.843777s)
--- PASS: TestForceSystemdEnv (78.20s)

                                                
                                    
TestErrorSpam/start (4.02s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-151900 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-151900 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-151900 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-151900 start --dry-run: (1.2982277s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-151900 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-151900 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-151900 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-151900 start --dry-run: (1.3323249s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-151900 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-151900 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-151900 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-151900 start --dry-run: (1.3812969s)
--- PASS: TestErrorSpam/start (4.02s)

                                                
                                    
TestErrorSpam/status (2.81s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-151900 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-151900 status
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-151900 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-151900 status
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-151900 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-151900 status
--- PASS: TestErrorSpam/status (2.81s)

                                                
                                    
TestErrorSpam/pause (3.34s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-151900 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-151900 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-151900 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-151900 pause: (1.4147157s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-151900 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-151900 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-151900 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-151900 pause
--- PASS: TestErrorSpam/pause (3.34s)

                                                
                                    
TestErrorSpam/unpause (3.58s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-151900 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-151900 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-151900 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-151900 unpause: (1.2686342s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-151900 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-151900 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-151900 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-151900 unpause: (1.2823107s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-151900 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-151900 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-151900 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-151900 unpause: (1.0249038s)
--- PASS: TestErrorSpam/unpause (3.58s)

                                                
                                    
TestErrorSpam/stop (19.45s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-151900 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-151900 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-151900 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-151900 stop: (12.1630474s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-151900 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-151900 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-151900 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-151900 stop: (3.6624829s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-151900 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-151900 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-151900 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-151900 stop: (3.6246694s)
--- PASS: TestErrorSpam/stop (19.45s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\test\nested\copy\2968\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

                                                
                                    
TestFunctional/serial/StartWithProxy (93.92s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-388800 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker
functional_test.go:2234: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-388800 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker: (1m33.9066438s)
--- PASS: TestFunctional/serial/StartWithProxy (93.92s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (36.07s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-388800 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-388800 --alsologtostderr -v=8: (36.0729983s)
functional_test.go:663: soft start took 36.0742537s for "functional-388800" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.07s)

                                                
                                    
TestFunctional/serial/KubeContext (0.14s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.14s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.37s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-388800 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.37s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (6.71s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-windows-amd64.exe -p functional-388800 cache add registry.k8s.io/pause:3.1: (2.3330676s)
functional_test.go:1049: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-windows-amd64.exe -p functional-388800 cache add registry.k8s.io/pause:3.3: (2.2002997s)
functional_test.go:1049: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-windows-amd64.exe -p functional-388800 cache add registry.k8s.io/pause:latest: (2.1723701s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (6.71s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (3.72s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-388800 C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local151904903\001
functional_test.go:1077: (dbg) Done: docker build -t minikube-local-cache-test:functional-388800 C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local151904903\001: (1.6791846s)
functional_test.go:1089: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 cache add minikube-local-cache-test:functional-388800
functional_test.go:1089: (dbg) Done: out/minikube-windows-amd64.exe -p functional-388800 cache add minikube-local-cache-test:functional-388800: (1.5826994s)
functional_test.go:1094: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 cache delete minikube-local-cache-test:functional-388800
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-388800
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (3.72s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.26s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.86s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.86s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (4.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-388800 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (830.5841ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-windows-amd64.exe -p functional-388800 cache reload: (1.6626977s)
functional_test.go:1163: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (4.16s)
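
cache_reload covers recovery after an image disappears from the node: pause:latest is removed with docker rmi, crictl inspecti then fails (the exit status 1 above), and `cache reload` pushes the cached image back before the final inspecti succeeds. A compact sketch of the same sequence driven from Go; the profile and image names come from the log, and error handling is trimmed for brevity:

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary used throughout this report and echoes its output.
func run(args ...string) error {
	out, err := exec.Command("out/minikube-windows-amd64.exe", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", args, out)
	return err
}

func main() {
	p := "functional-388800"
	// Remove the image inside the node, then confirm crictl no longer finds it.
	run("-p", p, "ssh", "sudo docker rmi registry.k8s.io/pause:latest")
	if err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
		fmt.Println("expected inspecti to fail after rmi")
	}
	// Reload from the local cache and check the image is back.
	run("-p", p, "cache", "reload")
	if err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("image still missing after cache reload:", err)
	}
}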

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.53s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.53s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.52s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 kubectl -- --context functional-388800 get pods
E0917 17:20:44.518344    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 17:20:44.529902    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 17:20:44.542252    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 17:20:44.564389    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 17:20:44.606700    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.52s)

                                                
                                    
TestFunctional/serial/ExtraConfig (75.96s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-388800 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0917 17:20:54.782517    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 17:21:05.025402    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 17:21:25.509030    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-388800 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m15.9594163s)
functional_test.go:761: restart took 1m15.9595318s for "functional-388800" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (75.96s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.2s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-388800 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.20s)

                                                
                                    
TestFunctional/serial/LogsCmd (2.45s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 logs
E0917 17:22:06.471728    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:1236: (dbg) Done: out/minikube-windows-amd64.exe -p functional-388800 logs: (2.4508967s)
--- PASS: TestFunctional/serial/LogsCmd (2.45s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (2.54s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 logs --file C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialLogsFileCmd88391266\001\logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-windows-amd64.exe -p functional-388800 logs --file C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialLogsFileCmd88391266\001\logs.txt: (2.5382839s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.54s)

                                                
                                    
TestFunctional/serial/InvalidService (5.95s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-388800 apply -f testdata\invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-388800
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-388800: exit status 115 (1.1967052s)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30621 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_service_6bd82f1fe87f7552f02cc11dc4370801e3dafecc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-388800 delete -f testdata\invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-388800 delete -f testdata\invalidsvc.yaml: (1.2871192s)
--- PASS: TestFunctional/serial/InvalidService (5.95s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-388800 config get cpus: exit status 14 (262.9905ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-388800 config get cpus: exit status 14 (265.9294ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (1.75s)

                                                
                                    
TestFunctional/parallel/DryRun (2.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-388800 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-388800 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (972.1168ms)

                                                
                                                
-- stdout --
	* [functional-388800] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19662
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 17:22:48.926153    3768 out.go:345] Setting OutFile to fd 1528 ...
	I0917 17:22:49.014160    3768 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:22:49.014160    3768 out.go:358] Setting ErrFile to fd 1052...
	I0917 17:22:49.014160    3768 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:22:49.037158    3768 out.go:352] Setting JSON to false
	I0917 17:22:49.042158    3768 start.go:129] hostinfo: {"hostname":"minikube2","uptime":8496,"bootTime":1726585272,"procs":184,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0917 17:22:49.042158    3768 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 17:22:49.046158    3768 out.go:177] * [functional-388800] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0917 17:22:49.050170    3768 notify.go:220] Checking for updates...
	I0917 17:22:49.052170    3768 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0917 17:22:49.055150    3768 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 17:22:49.061149    3768 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0917 17:22:49.064418    3768 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 17:22:49.068286    3768 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 17:22:49.071288    3768 config.go:182] Loaded profile config "functional-388800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 17:22:49.073290    3768 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 17:22:49.278277    3768 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.1 (166053)
	I0917 17:22:49.289277    3768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 17:22:49.661181    3768 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:true NGoroutines:89 SystemTime:2024-09-17 17:22:49.63261956 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Inde
xServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657532416 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 E
xpected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaV
ersion:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://
github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0917 17:22:49.666198    3768 out.go:177] * Using the docker driver based on existing profile
	I0917 17:22:49.670243    3768 start.go:297] selected driver: docker
	I0917 17:22:49.670243    3768 start.go:901] validating driver "docker" against &{Name:functional-388800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-388800 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:22:49.670243    3768 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 17:22:49.729041    3768 out.go:201] 
	W0917 17:22:49.732027    3768 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0917 17:22:49.736033    3768 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-388800 --dry-run --alsologtostderr -v=1 --driver=docker
functional_test.go:991: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-388800 --dry-run --alsologtostderr -v=1 --driver=docker: (1.5588977s)
--- PASS: TestFunctional/parallel/DryRun (2.53s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-388800 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-388800 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (1.0073162s)

                                                
                                                
-- stdout --
	* [functional-388800] minikube v1.34.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19662
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 17:22:35.873976    9188 out.go:345] Setting OutFile to fd 1008 ...
	I0917 17:22:35.948196    9188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:22:35.948196    9188 out.go:358] Setting ErrFile to fd 1064...
	I0917 17:22:35.948196    9188 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:22:35.975936    9188 out.go:352] Setting JSON to false
	I0917 17:22:35.978453    9188 start.go:129] hostinfo: {"hostname":"minikube2","uptime":8483,"bootTime":1726585272,"procs":181,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0917 17:22:35.978453    9188 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0917 17:22:35.986947    9188 out.go:177] * [functional-388800] minikube v1.34.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0917 17:22:35.989962    9188 notify.go:220] Checking for updates...
	I0917 17:22:35.994105    9188 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0917 17:22:35.996254    9188 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 17:22:35.999257    9188 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0917 17:22:36.002511    9188 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 17:22:36.006151    9188 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 17:22:36.012143    9188 config.go:182] Loaded profile config "functional-388800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 17:22:36.012900    9188 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 17:22:36.209894    9188 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.1 (166053)
	I0917 17:22:36.217808    9188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 17:22:36.569994    9188 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:82 SystemTime:2024-09-17 17:22:36.539718254 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657532416 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0917 17:22:36.573989    9188 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0917 17:22:36.578014    9188 start.go:297] selected driver: docker
	I0917 17:22:36.578014    9188 start.go:901] validating driver "docker" against &{Name:functional-388800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-388800 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:22:36.578014    9188 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 17:22:36.698932    9188 out.go:201] 
	W0917 17:22:36.701740    9188 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0917 17:22:36.705117    9188 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (1.01s)

                                                
                                    
TestFunctional/parallel/StatusCmd (3.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 status
functional_test.go:854: (dbg) Done: out/minikube-windows-amd64.exe -p functional-388800 status: (1.2274436s)
functional_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (3.16s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.68s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (114.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [5bef4a9d-5cf0-4ce9-834f-e7696b69f361] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.0077942s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-388800 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-388800 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-388800 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-388800 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3a296f3c-6c98-432c-a0bc-22c19377bd87] Pending
helpers_test.go:344: "sp-pod" [3a296f3c-6c98-432c-a0bc-22c19377bd87] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3a296f3c-6c98-432c-a0bc-22c19377bd87] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 44.0092002s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-388800 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-388800 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-388800 delete -f testdata/storage-provisioner/pod.yaml: (1.467025s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-388800 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6e5e8794-867c-43a1-bece-ab315e293357] Pending
helpers_test.go:344: "sp-pod" [6e5e8794-867c-43a1-bece-ab315e293357] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6e5e8794-867c-43a1-bece-ab315e293357] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 1m1.0099578s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-388800 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (114.69s)

                                                
                                    
TestFunctional/parallel/SSHCmd (1.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.71s)

                                                
                                    
TestFunctional/parallel/CpCmd (5.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 ssh -n functional-388800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 cp functional-388800:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalparallelCpCmd2129903981\001\cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 ssh -n functional-388800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 ssh -n functional-388800 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (5.22s)

                                                
                                    
TestFunctional/parallel/MySQL (75.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-388800 replace --force -f testdata\mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-8m98z" [aaded942-a6bc-4ddb-a063-1da6ca88e935] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-8m98z" [aaded942-a6bc-4ddb-a063-1da6ca88e935] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 1m2.0089342s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-388800 exec mysql-6cdb49bbb-8m98z -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-388800 exec mysql-6cdb49bbb-8m98z -- mysql -ppassword -e "show databases;": exit status 1 (297.8045ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-388800 exec mysql-6cdb49bbb-8m98z -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-388800 exec mysql-6cdb49bbb-8m98z -- mysql -ppassword -e "show databases;": exit status 1 (324.7545ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-388800 exec mysql-6cdb49bbb-8m98z -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-388800 exec mysql-6cdb49bbb-8m98z -- mysql -ppassword -e "show databases;": exit status 1 (338.055ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-388800 exec mysql-6cdb49bbb-8m98z -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-388800 exec mysql-6cdb49bbb-8m98z -- mysql -ppassword -e "show databases;": exit status 1 (342.7135ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-388800 exec mysql-6cdb49bbb-8m98z -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-388800 exec mysql-6cdb49bbb-8m98z -- mysql -ppassword -e "show databases;": exit status 1 (324.4315ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-388800 exec mysql-6cdb49bbb-8m98z -- mysql -ppassword -e "show databases;"
E0917 17:25:44.520999    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 17:26:12.238210    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
--- PASS: TestFunctional/parallel/MySQL (75.30s)

                                                
                                    
TestFunctional/parallel/FileSync (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/2968/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 ssh "sudo cat /etc/test/nested/copy/2968/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.75s)

                                                
                                    
TestFunctional/parallel/CertSync (4.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/2968.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 ssh "sudo cat /etc/ssl/certs/2968.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/2968.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 ssh "sudo cat /usr/share/ca-certificates/2968.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/29682.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 ssh "sudo cat /etc/ssl/certs/29682.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/29682.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 ssh "sudo cat /usr/share/ca-certificates/29682.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (4.47s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-388800 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.22s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-388800 ssh "sudo systemctl is-active crio": exit status 1 (941.4992ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.94s)

                                                
                                    
TestFunctional/parallel/License (3.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2288: (dbg) Done: out/minikube-windows-amd64.exe license: (3.8430407s)
--- PASS: TestFunctional/parallel/License (3.86s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1275: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.0437868s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (1.45s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1310: (dbg) Done: out/minikube-windows-amd64.exe profile list: (1.0038594s)
functional_test.go:1315: Took "1.0038594s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1329: Took "279.7418ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (1.28s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-388800 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-388800 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-388800 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 4336: OpenProcess: The parameter is incorrect.
helpers_test.go:502: unable to terminate pid 7852: The parameter is incorrect.
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-388800 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.98s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-388800 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-388800 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [97e52422-3f39-4098-9b3e-2983e9709010] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [97e52422-3f39-4098-9b3e-2983e9709010] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 15.0083224s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.59s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1361: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (1.0246183s)
functional_test.go:1366: Took "1.0246183s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1379: Took "287.5612ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (1.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (25.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-388800 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-388800 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-2jdps" [2549beae-1a99-4dbf-b2b6-581fcea8ad80] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-2jdps" [2549beae-1a99-4dbf-b2b6-581fcea8ad80] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 25.0083491s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (25.59s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.17s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-388800 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.17s)
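Note: the serial TunnelCmd tests above amount to a three-step workflow that can be reproduced by hand. The profile name, manifest path, and service name below are the ones used in this run, and the tunnel command has to stay running in its own terminal for the LoadBalancer ingress IP to be populated:

	out/minikube-windows-amd64.exe -p functional-388800 tunnel --alsologtostderr
	kubectl --context functional-388800 apply -f testdata\testsvc.yaml
	kubectl --context functional-388800 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}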

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-388800 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1664: TerminateProcess: Access is denied.
helpers_test.go:508: unable to kill pid 10512: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

TestFunctional/parallel/DockerEnv/powershell (8.58s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:499: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-388800 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-388800"
functional_test.go:499: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-388800 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-388800": (4.7786757s)
functional_test.go:522: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-388800 docker-env | Invoke-Expression ; docker images"
functional_test.go:522: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-388800 docker-env | Invoke-Expression ; docker images": (3.7877058s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (8.58s)
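Note: the docker-env test exercises the usual pattern of pointing the host Docker CLI at the daemon inside the minikube node. A minimal PowerShell sketch using this run's profile name:

	out/minikube-windows-amd64.exe -p functional-388800 docker-env | Invoke-Expression
	out/minikube-windows-amd64.exe status -p functional-388800
	docker images

After Invoke-Expression applies the emitted environment variables, docker images lists the images inside the functional-388800 node rather than on the host, which is what the second command of the test checks.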

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 service list
functional_test.go:1459: (dbg) Done: out/minikube-windows-amd64.exe -p functional-388800 service list: (1.1590137s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.16s)

TestFunctional/parallel/Version/short (0.28s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 version --short
--- PASS: TestFunctional/parallel/Version/short (0.28s)

TestFunctional/parallel/Version/components (1.73s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-windows-amd64.exe -p functional-388800 version -o=json --components: (1.7346358s)
--- PASS: TestFunctional/parallel/Version/components (1.73s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 service list -o json
functional_test.go:1489: (dbg) Done: out/minikube-windows-amd64.exe -p functional-388800 service list -o json: (1.1506412s)
functional_test.go:1494: Took "1.1506412s" to run "out/minikube-windows-amd64.exe -p functional-388800 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.15s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-388800 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-388800
docker.io/kicbase/echo-server:functional-388800
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-388800 image ls --format short --alsologtostderr:
I0917 17:23:41.908876    8972 out.go:345] Setting OutFile to fd 1416 ...
I0917 17:23:42.000749    8972 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:23:42.000749    8972 out.go:358] Setting ErrFile to fd 1320...
I0917 17:23:42.000749    8972 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:23:42.020267    8972 config.go:182] Loaded profile config "functional-388800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 17:23:42.021006    8972 config.go:182] Loaded profile config "functional-388800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 17:23:42.040397    8972 cli_runner.go:164] Run: docker container inspect functional-388800 --format={{.State.Status}}
I0917 17:23:42.139477    8972 ssh_runner.go:195] Run: systemctl --version
I0917 17:23:42.146584    8972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-388800
I0917 17:23:42.232204    8972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54903 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-388800\id_rsa Username:docker}
I0917 17:23:42.385874    8972 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.71s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-388800 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/kicbase/echo-server               | functional-388800 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-388800 | d5da29f509e3a | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| docker.io/library/nginx                     | alpine            | c7b4f26a7d93f | 43.2MB |
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| localhost/my-image                          | functional-388800 | 4d4ee982e1bf7 | 1.24MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-388800 image ls --format table --alsologtostderr:
I0917 17:23:54.352441    9260 out.go:345] Setting OutFile to fd 1764 ...
I0917 17:23:54.432795    9260 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:23:54.432795    9260 out.go:358] Setting ErrFile to fd 1720...
I0917 17:23:54.432795    9260 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:23:54.453395    9260 config.go:182] Loaded profile config "functional-388800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 17:23:54.454335    9260 config.go:182] Loaded profile config "functional-388800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 17:23:54.473945    9260 cli_runner.go:164] Run: docker container inspect functional-388800 --format={{.State.Status}}
I0917 17:23:54.560174    9260 ssh_runner.go:195] Run: systemctl --version
I0917 17:23:54.568161    9260 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-388800
I0917 17:23:54.650835    9260 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54903 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-388800\id_rsa Username:docker}
I0917 17:23:54.792259    9260 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.65s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-388800 image ls --format json --alsologtostderr:
[{"id":"c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"d5da29f509e3a260ee829c704f62493b514b1c241c314d1172af94689ca6f88d","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-388800"],"size":"30"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4a
c5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"4d4ee982e1bf74de8086545acdf1a487b84d6c0fe9a6dfa992897ea33d5add0a","repoDigests":[],"repoTags":["localhost/my-image:functional-388800"],"size":"1240000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-contr
oller-manager:v1.31.1"],"size":"88400000"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-388800"],"size":"4940000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-388800 image ls --format json --alsologtostderr:
I0917 17:23:53.686840    5288 out.go:345] Setting OutFile to fd 1696 ...
I0917 17:23:53.781702    5288 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:23:53.781702    5288 out.go:358] Setting ErrFile to fd 1204...
I0917 17:23:53.781820    5288 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:23:53.799324    5288 config.go:182] Loaded profile config "functional-388800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 17:23:53.800602    5288 config.go:182] Loaded profile config "functional-388800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 17:23:53.816859    5288 cli_runner.go:164] Run: docker container inspect functional-388800 --format={{.State.Status}}
I0917 17:23:53.912853    5288 ssh_runner.go:195] Run: systemctl --version
I0917 17:23:53.919166    5288 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-388800
I0917 17:23:53.996195    5288 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54903 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-388800\id_rsa Username:docker}
I0917 17:23:54.150790    5288 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.66s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-388800 image ls --format yaml --alsologtostderr:
- id: c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-388800
size: "4940000"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: d5da29f509e3a260ee829c704f62493b514b1c241c314d1172af94689ca6f88d
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-388800
size: "30"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-388800 image ls --format yaml --alsologtostderr:
I0917 17:23:42.613729    7260 out.go:345] Setting OutFile to fd 1712 ...
I0917 17:23:42.703920    7260 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:23:42.703920    7260 out.go:358] Setting ErrFile to fd 1708...
I0917 17:23:42.703920    7260 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:23:42.719915    7260 config.go:182] Loaded profile config "functional-388800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 17:23:42.719915    7260 config.go:182] Loaded profile config "functional-388800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 17:23:42.738924    7260 cli_runner.go:164] Run: docker container inspect functional-388800 --format={{.State.Status}}
I0917 17:23:42.826456    7260 ssh_runner.go:195] Run: systemctl --version
I0917 17:23:42.834443    7260 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-388800
I0917 17:23:42.909762    7260 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54903 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-388800\id_rsa Username:docker}
I0917 17:23:43.061518    7260 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.69s)

TestFunctional/parallel/ImageCommands/ImageBuild (10.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-388800 ssh pgrep buildkitd: exit status 1 (800.2819ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 image build -t localhost/my-image:functional-388800 testdata\build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-windows-amd64.exe -p functional-388800 image build -t localhost/my-image:functional-388800 testdata\build --alsologtostderr: (8.9279414s)
functional_test.go:323: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-388800 image build -t localhost/my-image:functional-388800 testdata\build --alsologtostderr:
I0917 17:23:44.103969    5792 out.go:345] Setting OutFile to fd 1220 ...
I0917 17:23:44.373970    5792 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:23:44.373970    5792 out.go:358] Setting ErrFile to fd 1232...
I0917 17:23:44.374116    5792 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:23:44.390953    5792 config.go:182] Loaded profile config "functional-388800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 17:23:44.409050    5792 config.go:182] Loaded profile config "functional-388800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0917 17:23:44.424273    5792 cli_runner.go:164] Run: docker container inspect functional-388800 --format={{.State.Status}}
I0917 17:23:44.522872    5792 ssh_runner.go:195] Run: systemctl --version
I0917 17:23:44.531245    5792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-388800
I0917 17:23:44.613046    5792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54903 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-388800\id_rsa Username:docker}
I0917 17:23:44.732493    5792 build_images.go:161] Building image from path: C:\Users\jenkins.minikube2\AppData\Local\Temp\build.1692354009.tar
I0917 17:23:44.744512    5792 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0917 17:23:44.780839    5792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1692354009.tar
I0917 17:23:44.789823    5792 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1692354009.tar: stat -c "%s %y" /var/lib/minikube/build/build.1692354009.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1692354009.tar': No such file or directory
I0917 17:23:44.790803    5792 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\AppData\Local\Temp\build.1692354009.tar --> /var/lib/minikube/build/build.1692354009.tar (3072 bytes)
I0917 17:23:44.860409    5792 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1692354009
I0917 17:23:44.900391    5792 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1692354009 -xf /var/lib/minikube/build/build.1692354009.tar
I0917 17:23:44.942554    5792 docker.go:360] Building image: /var/lib/minikube/build/build.1692354009
I0917 17:23:44.954378    5792 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-388800 /var/lib/minikube/build/build.1692354009
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.8s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 1.0s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 4.5s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.2s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:4d4ee982e1bf74de8086545acdf1a487b84d6c0fe9a6dfa992897ea33d5add0a
#8 writing image sha256:4d4ee982e1bf74de8086545acdf1a487b84d6c0fe9a6dfa992897ea33d5add0a 0.0s done
#8 naming to localhost/my-image:functional-388800 0.0s done
#8 DONE 0.2s
I0917 17:23:52.803548    5792 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-388800 /var/lib/minikube/build/build.1692354009: (7.849s)
I0917 17:23:52.817441    5792 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1692354009
I0917 17:23:52.856074    5792 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1692354009.tar
I0917 17:23:52.878909    5792 build_images.go:217] Built localhost/my-image:functional-388800 from C:\Users\jenkins.minikube2\AppData\Local\Temp\build.1692354009.tar
I0917 17:23:52.878965    5792 build_images.go:133] succeeded building to: functional-388800
I0917 17:23:52.879180    5792 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (10.38s)
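Note: the build above corresponds to the two commands below, with the tag, profile, and testdata\build context taken from this run; the build log shows the context is a small busybox-based image (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /):

	out/minikube-windows-amd64.exe -p functional-388800 image build -t localhost/my-image:functional-388800 testdata\build --alsologtostderr
	out/minikube-windows-amd64.exe -p functional-388800 image ls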

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.07s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.897126s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-388800
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.07s)

TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-388800 service --namespace=default --https --url hello-node: exit status 1 (15.0117312s)

                                                
                                                
-- stdout --
	https://127.0.0.1:55291

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
functional_test.go:1522: found endpoint: https://127.0.0.1:55291
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 image load --daemon kicbase/echo-server:functional-388800 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-windows-amd64.exe -p functional-388800 image load --daemon kicbase/echo-server:functional-388800 --alsologtostderr: (2.3851555s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.05s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 image load --daemon kicbase/echo-server:functional-388800 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-windows-amd64.exe -p functional-388800 image load --daemon kicbase/echo-server:functional-388800 --alsologtostderr: (1.4971837s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.19s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-388800
functional_test.go:245: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 image load --daemon kicbase/echo-server:functional-388800 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-windows-amd64.exe -p functional-388800 image load --daemon kicbase/echo-server:functional-388800 --alsologtostderr: (1.5222216s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.04s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 image save kicbase/echo-server:functional-388800 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-windows-amd64.exe -p functional-388800 image save kicbase/echo-server:functional-388800 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr: (1.056958s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.06s)

TestFunctional/parallel/ImageCommands/ImageRemove (1.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 image rm kicbase/echo-server:functional-388800 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.69s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-windows-amd64.exe -p functional-388800 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr: (1.3568802s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.02s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-388800
functional_test.go:424: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 image save --daemon kicbase/echo-server:functional-388800 --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-windows-amd64.exe -p functional-388800 image save --daemon kicbase/echo-server:functional-388800 --alsologtostderr: (1.0430601s)
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-388800
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.24s)
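Note: ImageSaveToFile, ImageLoadFromFile, and ImageSaveDaemon together cover the tar-file and daemon round trips. A condensed sketch using the image and workspace path from this run:

	out/minikube-windows-amd64.exe -p functional-388800 image save kicbase/echo-server:functional-388800 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
	out/minikube-windows-amd64.exe -p functional-388800 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
	out/minikube-windows-amd64.exe -p functional-388800 image save --daemon kicbase/echo-server:functional-388800 --alsologtostderr
	docker image inspect kicbase/echo-server:functional-388800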

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-388800 service hello-node --url --format={{.IP}}: exit status 1 (15.0131823s)

                                                
                                                
-- stdout --
	127.0.0.1

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.01s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.47s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.47s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.5s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.50s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.49s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.49s)

TestFunctional/parallel/ServiceCmd/URL (15.02s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-388800 service hello-node --url
E0917 17:23:28.394736    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-388800 service hello-node --url: exit status 1 (15.0161159s)

                                                
                                                
-- stdout --
	http://127.0.0.1:55371

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
functional_test.go:1565: found endpoint for hello-node: http://127.0.0.1:55371
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.02s)

TestFunctional/delete_echo-server_images (0.2s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-388800
--- PASS: TestFunctional/delete_echo-server_images (0.20s)

TestFunctional/delete_my-image_image (0.09s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-388800
--- PASS: TestFunctional/delete_my-image_image (0.09s)

TestFunctional/delete_minikube_cached_images (0.09s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-388800
--- PASS: TestFunctional/delete_minikube_cached_images (0.09s)

TestMultiControlPlane/serial/StartCluster (209.66s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-920700 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker
E0917 17:30:44.523901    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-920700 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker: (3m27.3599523s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-920700 status -v=7 --alsologtostderr: (2.3008024s)
--- PASS: TestMultiControlPlane/serial/StartCluster (209.66s)
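Note: the HA start above reduces to the two commands below; the memory, verbosity, and driver flags are this run's choices rather than requirements. With --ha the cluster comes up with three control-plane nodes, which is consistent with the ha-920700-m04 worker that AddWorkerNode attaches later in this report:

	out/minikube-windows-amd64.exe start -p ha-920700 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker
	out/minikube-windows-amd64.exe -p ha-920700 status -v=7 --alsologtostderr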

                                                
                                    
TestMultiControlPlane/serial/DeployApp (16.08s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-920700 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-920700 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-920700 -- rollout status deployment/busybox: (5.8496127s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-920700 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-920700 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-920700 -- exec busybox-7dff88458-28hqv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-920700 -- exec busybox-7dff88458-28hqv -- nslookup kubernetes.io: (1.7916851s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-920700 -- exec busybox-7dff88458-9ccbl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-920700 -- exec busybox-7dff88458-9ccbl -- nslookup kubernetes.io: (1.5979942s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-920700 -- exec busybox-7dff88458-ctb7s -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-920700 -- exec busybox-7dff88458-ctb7s -- nslookup kubernetes.io: (1.5792818s)
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-920700 -- exec busybox-7dff88458-28hqv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-920700 -- exec busybox-7dff88458-9ccbl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-920700 -- exec busybox-7dff88458-ctb7s -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-920700 -- exec busybox-7dff88458-28hqv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-920700 -- exec busybox-7dff88458-9ccbl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-920700 -- exec busybox-7dff88458-ctb7s -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (16.08s)

TestMultiControlPlane/serial/PingHostFromPods (3.77s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-920700 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-920700 -- exec busybox-7dff88458-28hqv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-920700 -- exec busybox-7dff88458-28hqv -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-920700 -- exec busybox-7dff88458-9ccbl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-920700 -- exec busybox-7dff88458-9ccbl -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-920700 -- exec busybox-7dff88458-ctb7s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-920700 -- exec busybox-7dff88458-ctb7s -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (3.77s)
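Note: the per-pod host connectivity check can be rerun directly; the pod name and gateway address below are specific to this run:

	out/minikube-windows-amd64.exe kubectl -p ha-920700 -- exec busybox-7dff88458-28hqv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out/minikube-windows-amd64.exe kubectl -p ha-920700 -- exec busybox-7dff88458-28hqv -- sh -c "ping -c 1 192.168.65.254"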

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (55.48s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-920700 -v=7 --alsologtostderr
E0917 17:32:18.827770    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 17:32:18.835741    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 17:32:18.847544    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 17:32:18.869568    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 17:32:18.912164    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 17:32:18.994351    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 17:32:19.156788    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 17:32:19.479321    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 17:32:20.121630    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 17:32:21.403846    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 17:32:23.966157    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 17:32:29.088341    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 17:32:39.331453    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-920700 -v=7 --alsologtostderr: (52.5293319s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-920700 status -v=7 --alsologtostderr: (2.9539955s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.48s)

TestMultiControlPlane/serial/NodeLabels (0.23s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-920700 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.23s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (2.38s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.3826431s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (2.38s)

TestMultiControlPlane/serial/CopyFile (49.94s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-920700 status --output json -v=7 --alsologtostderr: (2.8773625s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 cp testdata\cp-test.txt ha-920700:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 ssh -n ha-920700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 cp ha-920700:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile505146235\001\cp-test_ha-920700.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 ssh -n ha-920700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 cp ha-920700:/home/docker/cp-test.txt ha-920700-m02:/home/docker/cp-test_ha-920700_ha-920700-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-920700 cp ha-920700:/home/docker/cp-test.txt ha-920700-m02:/home/docker/cp-test_ha-920700_ha-920700-m02.txt: (1.1918722s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 ssh -n ha-920700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 ssh -n ha-920700-m02 "sudo cat /home/docker/cp-test_ha-920700_ha-920700-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 cp ha-920700:/home/docker/cp-test.txt ha-920700-m03:/home/docker/cp-test_ha-920700_ha-920700-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-920700 cp ha-920700:/home/docker/cp-test.txt ha-920700-m03:/home/docker/cp-test_ha-920700_ha-920700-m03.txt: (1.139574s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 ssh -n ha-920700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 ssh -n ha-920700-m03 "sudo cat /home/docker/cp-test_ha-920700_ha-920700-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 cp ha-920700:/home/docker/cp-test.txt ha-920700-m04:/home/docker/cp-test_ha-920700_ha-920700-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-920700 cp ha-920700:/home/docker/cp-test.txt ha-920700-m04:/home/docker/cp-test_ha-920700_ha-920700-m04.txt: (1.13929s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 ssh -n ha-920700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 ssh -n ha-920700-m04 "sudo cat /home/docker/cp-test_ha-920700_ha-920700-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 cp testdata\cp-test.txt ha-920700-m02:/home/docker/cp-test.txt
E0917 17:32:59.813675    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 ssh -n ha-920700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 cp ha-920700-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile505146235\001\cp-test_ha-920700-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 ssh -n ha-920700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 cp ha-920700-m02:/home/docker/cp-test.txt ha-920700:/home/docker/cp-test_ha-920700-m02_ha-920700.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-920700 cp ha-920700-m02:/home/docker/cp-test.txt ha-920700:/home/docker/cp-test_ha-920700-m02_ha-920700.txt: (1.2266717s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 ssh -n ha-920700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 ssh -n ha-920700 "sudo cat /home/docker/cp-test_ha-920700-m02_ha-920700.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 cp ha-920700-m02:/home/docker/cp-test.txt ha-920700-m03:/home/docker/cp-test_ha-920700-m02_ha-920700-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-920700 cp ha-920700-m02:/home/docker/cp-test.txt ha-920700-m03:/home/docker/cp-test_ha-920700-m02_ha-920700-m03.txt: (1.2164264s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 ssh -n ha-920700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 ssh -n ha-920700-m03 "sudo cat /home/docker/cp-test_ha-920700-m02_ha-920700-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 cp ha-920700-m02:/home/docker/cp-test.txt ha-920700-m04:/home/docker/cp-test_ha-920700-m02_ha-920700-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-920700 cp ha-920700-m02:/home/docker/cp-test.txt ha-920700-m04:/home/docker/cp-test_ha-920700-m02_ha-920700-m04.txt: (1.1642139s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 ssh -n ha-920700-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 ssh -n ha-920700-m04 "sudo cat /home/docker/cp-test_ha-920700-m02_ha-920700-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 cp testdata\cp-test.txt ha-920700-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 ssh -n ha-920700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 cp ha-920700-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile505146235\001\cp-test_ha-920700-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 ssh -n ha-920700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 cp ha-920700-m03:/home/docker/cp-test.txt ha-920700:/home/docker/cp-test_ha-920700-m03_ha-920700.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-920700 cp ha-920700-m03:/home/docker/cp-test.txt ha-920700:/home/docker/cp-test_ha-920700-m03_ha-920700.txt: (1.2343887s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 ssh -n ha-920700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 ssh -n ha-920700 "sudo cat /home/docker/cp-test_ha-920700-m03_ha-920700.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 cp ha-920700-m03:/home/docker/cp-test.txt ha-920700-m02:/home/docker/cp-test_ha-920700-m03_ha-920700-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-920700 cp ha-920700-m03:/home/docker/cp-test.txt ha-920700-m02:/home/docker/cp-test_ha-920700-m03_ha-920700-m02.txt: (1.1864753s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 ssh -n ha-920700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 ssh -n ha-920700-m02 "sudo cat /home/docker/cp-test_ha-920700-m03_ha-920700-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 cp ha-920700-m03:/home/docker/cp-test.txt ha-920700-m04:/home/docker/cp-test_ha-920700-m03_ha-920700-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-920700 cp ha-920700-m03:/home/docker/cp-test.txt ha-920700-m04:/home/docker/cp-test_ha-920700-m03_ha-920700-m04.txt: (1.1671419s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 ssh -n ha-920700-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 ssh -n ha-920700-m04 "sudo cat /home/docker/cp-test_ha-920700-m03_ha-920700-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 cp testdata\cp-test.txt ha-920700-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 ssh -n ha-920700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 cp ha-920700-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile505146235\001\cp-test_ha-920700-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 ssh -n ha-920700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 cp ha-920700-m04:/home/docker/cp-test.txt ha-920700:/home/docker/cp-test_ha-920700-m04_ha-920700.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-920700 cp ha-920700-m04:/home/docker/cp-test.txt ha-920700:/home/docker/cp-test_ha-920700-m04_ha-920700.txt: (1.1902404s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 ssh -n ha-920700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 ssh -n ha-920700 "sudo cat /home/docker/cp-test_ha-920700-m04_ha-920700.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 cp ha-920700-m04:/home/docker/cp-test.txt ha-920700-m02:/home/docker/cp-test_ha-920700-m04_ha-920700-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-920700 cp ha-920700-m04:/home/docker/cp-test.txt ha-920700-m02:/home/docker/cp-test_ha-920700-m04_ha-920700-m02.txt: (1.2817292s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 ssh -n ha-920700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 ssh -n ha-920700-m02 "sudo cat /home/docker/cp-test_ha-920700-m04_ha-920700-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 cp ha-920700-m04:/home/docker/cp-test.txt ha-920700-m03:/home/docker/cp-test_ha-920700-m04_ha-920700-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-920700 cp ha-920700-m04:/home/docker/cp-test.txt ha-920700-m03:/home/docker/cp-test_ha-920700-m04_ha-920700-m03.txt: (1.1833515s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 ssh -n ha-920700-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 ssh -n ha-920700-m03 "sudo cat /home/docker/cp-test_ha-920700-m04_ha-920700-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (49.94s)
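
The CopyFile steps above exercise "minikube cp" in every direction (host to node, node to host, node to node) and verify each copy by reading the file back with "minikube ssh -n <node>". A minimal Go sketch of one such round trip, assuming a minikube binary on PATH (the report invokes out/minikube-windows-amd64.exe) and an HA profile like the ha-920700 one above; the helper is illustrative only and is not part of helpers_test.go:

    // copycheck.go - sketch of the cp/ssh round trip driven by the test above.
    // Assumes "minikube" is on PATH and a profile named like the one in the
    // log exists; paths mirror the log but the helper itself is hypothetical.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func run(args ...string) string {
        out, err := exec.Command("minikube", args...).CombinedOutput()
        if err != nil {
            log.Fatalf("minikube %v: %v\n%s", args, err, out)
        }
        return string(out)
    }

    func main() {
        const p = "ha-920700"
        // host -> primary node, then node -> node, then read it back over SSH.
        run("-p", p, "cp", "testdata/cp-test.txt", p+":/home/docker/cp-test.txt")
        run("-p", p, "cp", p+":/home/docker/cp-test.txt",
            p+"-m02:/home/docker/cp-test_ha-920700_ha-920700-m02.txt")
        got := run("-p", p, "ssh", "-n", p+"-m02",
            "sudo cat /home/docker/cp-test_ha-920700_ha-920700-m02.txt")
        fmt.Println("round-tripped contents:", strings.TrimSpace(got))
    }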

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (14.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 node stop m02 -v=7 --alsologtostderr
E0917 17:33:40.776086    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-windows-amd64.exe -p ha-920700 node stop m02 -v=7 --alsologtostderr: (11.8871354s)
ha_test.go:369: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-920700 status -v=7 --alsologtostderr: exit status 7 (2.3089273s)

                                                
                                                
-- stdout --
	ha-920700
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-920700-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-920700-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-920700-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 17:33:46.946276    2176 out.go:345] Setting OutFile to fd 1540 ...
	I0917 17:33:47.024863    2176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:33:47.024971    2176 out.go:358] Setting ErrFile to fd 1676...
	I0917 17:33:47.024971    2176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:33:47.048890    2176 out.go:352] Setting JSON to false
	I0917 17:33:47.048988    2176 mustload.go:65] Loading cluster: ha-920700
	I0917 17:33:47.049106    2176 notify.go:220] Checking for updates...
	I0917 17:33:47.049905    2176 config.go:182] Loaded profile config "ha-920700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 17:33:47.049991    2176 status.go:255] checking status of ha-920700 ...
	I0917 17:33:47.066994    2176 cli_runner.go:164] Run: docker container inspect ha-920700 --format={{.State.Status}}
	I0917 17:33:47.160758    2176 status.go:330] ha-920700 host status = "Running" (err=<nil>)
	I0917 17:33:47.160758    2176 host.go:66] Checking if "ha-920700" exists ...
	I0917 17:33:47.172644    2176 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-920700
	I0917 17:33:47.255220    2176 host.go:66] Checking if "ha-920700" exists ...
	I0917 17:33:47.269162    2176 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:33:47.276202    2176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-920700
	I0917 17:33:47.350552    2176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55519 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\ha-920700\id_rsa Username:docker}
	I0917 17:33:47.507376    2176 ssh_runner.go:195] Run: systemctl --version
	I0917 17:33:47.539275    2176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:33:47.573325    2176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-920700
	I0917 17:33:47.652867    2176 kubeconfig.go:125] found "ha-920700" server: "https://127.0.0.1:55523"
	I0917 17:33:47.652867    2176 api_server.go:166] Checking apiserver status ...
	I0917 17:33:47.667291    2176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:33:47.713049    2176 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2439/cgroup
	I0917 17:33:47.737446    2176 api_server.go:182] apiserver freezer: "7:freezer:/docker/c054a2748ee5c154eb0782a8ebb4eb4cc81d340a9c9b3605a35dd3be7f19693a/kubepods/burstable/pod983d24f0106ec36785f75f70bb4abb32/3a97c9389a982c40032892501c862e52de33f378ab7fbc5b96b8ae33192413fa"
	I0917 17:33:47.750475    2176 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c054a2748ee5c154eb0782a8ebb4eb4cc81d340a9c9b3605a35dd3be7f19693a/kubepods/burstable/pod983d24f0106ec36785f75f70bb4abb32/3a97c9389a982c40032892501c862e52de33f378ab7fbc5b96b8ae33192413fa/freezer.state
	I0917 17:33:47.776769    2176 api_server.go:204] freezer state: "THAWED"
	I0917 17:33:47.776856    2176 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55523/healthz ...
	I0917 17:33:47.792050    2176 api_server.go:279] https://127.0.0.1:55523/healthz returned 200:
	ok
	I0917 17:33:47.792050    2176 status.go:422] ha-920700 apiserver status = Running (err=<nil>)
	I0917 17:33:47.792050    2176 status.go:257] ha-920700 status: &{Name:ha-920700 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:33:47.792050    2176 status.go:255] checking status of ha-920700-m02 ...
	I0917 17:33:47.815442    2176 cli_runner.go:164] Run: docker container inspect ha-920700-m02 --format={{.State.Status}}
	I0917 17:33:47.899501    2176 status.go:330] ha-920700-m02 host status = "Stopped" (err=<nil>)
	I0917 17:33:47.899623    2176 status.go:343] host is not running, skipping remaining checks
	I0917 17:33:47.899623    2176 status.go:257] ha-920700-m02 status: &{Name:ha-920700-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:33:47.899684    2176 status.go:255] checking status of ha-920700-m03 ...
	I0917 17:33:47.928282    2176 cli_runner.go:164] Run: docker container inspect ha-920700-m03 --format={{.State.Status}}
	I0917 17:33:48.011190    2176 status.go:330] ha-920700-m03 host status = "Running" (err=<nil>)
	I0917 17:33:48.011190    2176 host.go:66] Checking if "ha-920700-m03" exists ...
	I0917 17:33:48.022286    2176 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-920700-m03
	I0917 17:33:48.102812    2176 host.go:66] Checking if "ha-920700-m03" exists ...
	I0917 17:33:48.117433    2176 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:33:48.127188    2176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-920700-m03
	I0917 17:33:48.205962    2176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55670 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\ha-920700-m03\id_rsa Username:docker}
	I0917 17:33:48.349335    2176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:33:48.385800    2176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-920700
	I0917 17:33:48.465458    2176 kubeconfig.go:125] found "ha-920700" server: "https://127.0.0.1:55523"
	I0917 17:33:48.465458    2176 api_server.go:166] Checking apiserver status ...
	I0917 17:33:48.481130    2176 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:33:48.521772    2176 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2428/cgroup
	I0917 17:33:48.548067    2176 api_server.go:182] apiserver freezer: "7:freezer:/docker/16041b95650127accf7b3501360ab67cc2b20c8111a9aea7ada7100438f280d2/kubepods/burstable/pod7388f72504f391934915ddab5bc0b216/7487500fb309605d640efb78f80213d3d9b31e74bf6164ab774fe2c78db59101"
	I0917 17:33:48.563155    2176 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/16041b95650127accf7b3501360ab67cc2b20c8111a9aea7ada7100438f280d2/kubepods/burstable/pod7388f72504f391934915ddab5bc0b216/7487500fb309605d640efb78f80213d3d9b31e74bf6164ab774fe2c78db59101/freezer.state
	I0917 17:33:48.598231    2176 api_server.go:204] freezer state: "THAWED"
	I0917 17:33:48.598231    2176 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55523/healthz ...
	I0917 17:33:48.614733    2176 api_server.go:279] https://127.0.0.1:55523/healthz returned 200:
	ok
	I0917 17:33:48.614733    2176 status.go:422] ha-920700-m03 apiserver status = Running (err=<nil>)
	I0917 17:33:48.614733    2176 status.go:257] ha-920700-m03 status: &{Name:ha-920700-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:33:48.614733    2176 status.go:255] checking status of ha-920700-m04 ...
	I0917 17:33:48.632300    2176 cli_runner.go:164] Run: docker container inspect ha-920700-m04 --format={{.State.Status}}
	I0917 17:33:48.715248    2176 status.go:330] ha-920700-m04 host status = "Running" (err=<nil>)
	I0917 17:33:48.715248    2176 host.go:66] Checking if "ha-920700-m04" exists ...
	I0917 17:33:48.725537    2176 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-920700-m04
	I0917 17:33:48.817353    2176 host.go:66] Checking if "ha-920700-m04" exists ...
	I0917 17:33:48.830961    2176 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:33:48.840498    2176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-920700-m04
	I0917 17:33:48.925544    2176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55834 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\ha-920700-m04\id_rsa Username:docker}
	I0917 17:33:49.078845    2176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:33:49.106378    2176 status.go:257] ha-920700-m04 status: &{Name:ha-920700-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (14.20s)
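
As the transcript shows, "minikube status" exits with status 7 once the m02 host is stopped while still printing per-node state on stdout. A short Go sketch, under the same assumptions as above (minikube on PATH, the ha-920700 profile present), that treats the non-zero exit as a degraded-but-readable result rather than a hard failure:

    // statuscheck.go - sketch of reading "minikube status" while a node is down.
    // The run above returned exit status 7 together with usable stdout; this
    // illustrative helper keeps that output instead of discarding it as an error.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("minikube", "-p", "ha-920700", "status").Output()
        var exitErr *exec.ExitError
        switch {
        case err == nil:
            fmt.Printf("all nodes healthy:\n%s", out)
        case errors.As(err, &exitErr):
            // Non-zero exit (7 in the run above) still comes with the per-node report.
            fmt.Printf("degraded cluster, exit code %d:\n%s", exitErr.ExitCode(), out)
        default:
            fmt.Println("could not run minikube:", err)
        }
    }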

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.7264269s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.73s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (86.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 node start m02 -v=7 --alsologtostderr
E0917 17:35:02.700008    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-windows-amd64.exe -p ha-920700 node start m02 -v=7 --alsologtostderr: (1m23.1654607s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-windows-amd64.exe -p ha-920700 status -v=7 --alsologtostderr: (2.7151983s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (86.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.1411116s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.14s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (279.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-windows-amd64.exe node list -p ha-920700 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-windows-amd64.exe stop -p ha-920700 -v=7 --alsologtostderr
E0917 17:35:44.526559    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-windows-amd64.exe stop -p ha-920700 -v=7 --alsologtostderr: (38.0060749s)
ha_test.go:467: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-920700 --wait=true -v=7 --alsologtostderr
E0917 17:37:07.606698    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 17:37:18.831474    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 17:37:46.544011    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-920700 --wait=true -v=7 --alsologtostderr: (4m0.5483687s)
ha_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe node list -p ha-920700
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (279.06s)
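
RestartClusterKeepsNodes stops the whole profile, starts it again with --wait=true, and then confirms that "minikube node list" still reports the same nodes. A rough Go sketch of that before/after comparison, with the same PATH and profile assumptions as the earlier sketches (the real check lives in ha_test.go and is more thorough):

    // nodediff.go - illustrative before/after node-list comparison around a
    // stop/start cycle; only node names are compared, not IPs or status.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "reflect"
        "strings"
    )

    func nodeNames(profile string) []string {
        out, err := exec.Command("minikube", "node", "list", "-p", profile).Output()
        if err != nil {
            log.Fatal(err)
        }
        var names []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if f := strings.Fields(line); len(f) > 0 {
                names = append(names, f[0])
            }
        }
        return names
    }

    func main() {
        const profile = "ha-920700"
        before := nodeNames(profile)
        for _, args := range [][]string{
            {"stop", "-p", profile},
            {"start", "-p", profile, "--wait=true"},
        } {
            if out, err := exec.Command("minikube", args...).CombinedOutput(); err != nil {
                log.Fatalf("minikube %v: %v\n%s", args, err, out)
            }
        }
        if after := nodeNames(profile); !reflect.DeepEqual(before, after) {
            log.Fatalf("node set changed: before %v, after %v", before, after)
        }
        fmt.Println("node set unchanged across restart")
    }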

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-windows-amd64.exe -p ha-920700 node delete m03 -v=7 --alsologtostderr: (14.5813217s)
ha_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Done: out/minikube-windows-amd64.exe -p ha-920700 status -v=7 --alsologtostderr: (2.1791562s)
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.35s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.5520069s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.55s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 stop -v=7 --alsologtostderr
E0917 17:40:44.529343    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-windows-amd64.exe -p ha-920700 stop -v=7 --alsologtostderr: (36.0854584s)
ha_test.go:537: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-920700 status -v=7 --alsologtostderr: exit status 7 (515.8069ms)

                                                
                                                
-- stdout --
	ha-920700
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-920700-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-920700-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 17:40:53.231212    3540 out.go:345] Setting OutFile to fd 2044 ...
	I0917 17:40:53.321907    3540 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:40:53.321907    3540 out.go:358] Setting ErrFile to fd 1492...
	I0917 17:40:53.321987    3540 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:40:53.336161    3540 out.go:352] Setting JSON to false
	I0917 17:40:53.336161    3540 mustload.go:65] Loading cluster: ha-920700
	I0917 17:40:53.336161    3540 notify.go:220] Checking for updates...
	I0917 17:40:53.337105    3540 config.go:182] Loaded profile config "ha-920700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 17:40:53.337105    3540 status.go:255] checking status of ha-920700 ...
	I0917 17:40:53.355416    3540 cli_runner.go:164] Run: docker container inspect ha-920700 --format={{.State.Status}}
	I0917 17:40:53.439862    3540 status.go:330] ha-920700 host status = "Stopped" (err=<nil>)
	I0917 17:40:53.439914    3540 status.go:343] host is not running, skipping remaining checks
	I0917 17:40:53.439914    3540 status.go:257] ha-920700 status: &{Name:ha-920700 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:40:53.439955    3540 status.go:255] checking status of ha-920700-m02 ...
	I0917 17:40:53.455326    3540 cli_runner.go:164] Run: docker container inspect ha-920700-m02 --format={{.State.Status}}
	I0917 17:40:53.529643    3540 status.go:330] ha-920700-m02 host status = "Stopped" (err=<nil>)
	I0917 17:40:53.529822    3540 status.go:343] host is not running, skipping remaining checks
	I0917 17:40:53.529822    3540 status.go:257] ha-920700-m02 status: &{Name:ha-920700-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:40:53.529899    3540 status.go:255] checking status of ha-920700-m04 ...
	I0917 17:40:53.549941    3540 cli_runner.go:164] Run: docker container inspect ha-920700-m04 --format={{.State.Status}}
	I0917 17:40:53.619154    3540 status.go:330] ha-920700-m04 host status = "Stopped" (err=<nil>)
	I0917 17:40:53.619154    3540 status.go:343] host is not running, skipping remaining checks
	I0917 17:40:53.619154    3540 status.go:257] ha-920700-m04 status: &{Name:ha-920700-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.60s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (100.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-920700 --wait=true -v=7 --alsologtostderr --driver=docker
E0917 17:42:18.833982    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-920700 --wait=true -v=7 --alsologtostderr --driver=docker: (1m37.3173716s)
ha_test.go:566: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 status -v=7 --alsologtostderr
ha_test.go:566: (dbg) Done: out/minikube-windows-amd64.exe -p ha-920700 status -v=7 --alsologtostderr: (2.398771s)
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (100.22s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.6452271s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.65s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (80.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-920700 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-920700 --control-plane -v=7 --alsologtostderr: (1m17.4869499s)
ha_test.go:611: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-920700 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-windows-amd64.exe -p ha-920700 status -v=7 --alsologtostderr: (2.943102s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (80.43s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (2.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.290811s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (2.29s)

                                                
                                    
TestImageBuild/serial/Setup (65.1s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-496000 --driver=docker
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-496000 --driver=docker: (1m5.1044959s)
--- PASS: TestImageBuild/serial/Setup (65.10s)

                                                
                                    
TestImageBuild/serial/NormalBuild (5.73s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-496000
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-496000: (5.7332406s)
--- PASS: TestImageBuild/serial/NormalBuild (5.73s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (2.75s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-496000
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-496000: (2.7458254s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (2.75s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (1.63s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-496000
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-496000: (1.6264947s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.63s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.72s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-496000
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-496000: (1.7179993s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.72s)
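
The four ImageBuild subtests above differ only in the flags handed to "minikube image build": a plain build, a build argument with the cache disabled, a .dockerignore-aware build, and a Dockerfile selected from a subdirectory with -f. A table-driven Go sketch of the same invocations, assuming it runs from a checkout that contains the testdata/image-build tree and that minikube is on PATH:

    // imagebuilds.go - illustrative table of the "minikube image build"
    // variants exercised above; flags and paths are copied from the log.
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        cases := [][]string{
            // plain build
            {"image", "build", "-t", "aaa:latest", "./testdata/image-build/test-normal", "-p", "image-496000"},
            // build argument, cache disabled
            {"image", "build", "-t", "aaa:latest", "--build-opt=build-arg=ENV_A=test_env_str",
                "--build-opt=no-cache", "./testdata/image-build/test-arg", "-p", "image-496000"},
            // .dockerignore honoured, cache disabled
            {"image", "build", "-t", "aaa:latest", "./testdata/image-build/test-normal",
                "--build-opt=no-cache", "-p", "image-496000"},
            // Dockerfile chosen explicitly with -f
            {"image", "build", "-t", "aaa:latest", "-f", "inner/Dockerfile",
                "./testdata/image-build/test-f", "-p", "image-496000"},
        }
        for _, args := range cases {
            if out, err := exec.Command("minikube", args...).CombinedOutput(); err != nil {
                log.Fatalf("minikube %v: %v\n%s", args, err, out)
            }
        }
    }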

                                                
                                    
TestJSONOutput/start/Command (99.69s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-770900 --output=json --user=testUser --memory=2200 --wait=true --driver=docker
E0917 17:45:44.531316    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-770900 --output=json --user=testUser --memory=2200 --wait=true --driver=docker: (1m39.6887943s)
--- PASS: TestJSONOutput/start/Command (99.69s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (1.41s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-770900 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-770900 --output=json --user=testUser: (1.414425s)
--- PASS: TestJSONOutput/pause/Command (1.41s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (1.26s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-770900 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-770900 --output=json --user=testUser: (1.2611196s)
--- PASS: TestJSONOutput/unpause/Command (1.26s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (12.59s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-770900 --output=json --user=testUser
E0917 17:47:18.835423    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-770900 --output=json --user=testUser: (12.5852266s)
--- PASS: TestJSONOutput/stop/Command (12.59s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (1.02s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-108900 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-108900 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (302.8203ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e30e4589-e947-4fee-958e-ce67f028d852","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-108900] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a3572290-737f-47a3-bf46-9083162e7c23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"f8b33edc-cd56-4197-adae-96202134bce7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"466ed1ea-65ed-4321-82bf-c87e44b80838","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"dc0bff35-41e7-4d65-884d-7dc6667d2239","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19662"}}
	{"specversion":"1.0","id":"347abbd6-6f82-41bb-a9d7-955f24947eb7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f342cf36-c83c-4ce0-921f-d8b268e6848a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-108900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-108900
--- PASS: TestErrorJSONOutput (1.02s)
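
With --output=json, minikube writes one CloudEvents-style JSON object per line, as in the stdout block above; the unsupported driver surfaces as a single io.k8s.sigs.minikube.error event carrying the exit code. A minimal Go decoder for those lines, modelling only the fields visible in this transcript (minikube's real schema is larger):

    // events.go - decode line-delimited minikube JSON events from stdin and
    // report any error event; the struct below is a deliberately small subset.
    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    type event struct {
        Type string            `json:"type"`
        Data map[string]string `json:"data"`
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var ev event
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // skip anything that is not a JSON event line
            }
            if ev.Type == "io.k8s.sigs.minikube.error" {
                fmt.Printf("error %s (exit code %s): %s\n",
                    ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
            }
        }
    }

Piping the failing start command from the transcript into this program would report the DRV_UNSUPPORTED_OS event with exit code 56 shown above.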

                                                
                                    
TestKicCustomNetwork/create_custom_network (73.22s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-552800 --network=
E0917 17:48:41.912666    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-552800 --network=: (1m8.9443116s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-552800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-552800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-552800: (4.172342s)
--- PASS: TestKicCustomNetwork/create_custom_network (73.22s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (71.45s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-276300 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-276300 --network=bridge: (1m7.8982969s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-276300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-276300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-276300: (3.4636923s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (71.45s)

                                                
                                    
TestKicExistingNetwork (72.73s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-807200 --network=existing-network
E0917 17:50:44.534135    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-807200 --network=existing-network: (1m8.2350201s)
helpers_test.go:175: Cleaning up "existing-network-807200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-807200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-807200: (3.6703459s)
--- PASS: TestKicExistingNetwork (72.73s)

                                                
                                    
TestKicCustomSubnet (70.88s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-612400 --subnet=192.168.60.0/24
E0917 17:52:18.838621    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-612400 --subnet=192.168.60.0/24: (1m6.8207152s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-612400 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-612400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-612400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-612400: (3.9643283s)
--- PASS: TestKicCustomSubnet (70.88s)
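
TestKicCustomSubnet starts a profile with --subnet and then reads the subnet back from the Docker network of the same name using the inspect format string shown above. A small Go sketch of that verification, assuming docker is on PATH and the network from the run (or an equivalent one) still exists:

    // subnetcheck.go - illustrative check that the Docker network created for
    // the profile carries the subnet requested with --subnet.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        const network, want = "custom-subnet-612400", "192.168.60.0/24"
        out, err := exec.Command("docker", "network", "inspect", network,
            "--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
        if err != nil {
            log.Fatal(err)
        }
        if got := strings.TrimSpace(string(out)); got != want {
            log.Fatalf("network %s has subnet %s, wanted %s", network, got, want)
        }
        fmt.Println("subnet matches:", want)
    }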

                                                
                                    
TestKicStaticIP (72.61s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe start -p static-ip-269000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe start -p static-ip-269000 --static-ip=192.168.200.200: (1m8.1320013s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe -p static-ip-269000 ip
helpers_test.go:175: Cleaning up "static-ip-269000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p static-ip-269000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p static-ip-269000: (4.0275923s)
--- PASS: TestKicStaticIP (72.61s)
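
TestKicStaticIP makes the complementary check for --static-ip: once the profile is up, "minikube ip" should report exactly the requested address. A one-check Go sketch along the same lines (profile name and address taken from the log; minikube on PATH is assumed):

    // staticip.go - illustrative check that "minikube ip" matches the address
    // requested with --static-ip for this profile.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        const profile, want = "static-ip-269000", "192.168.200.200"
        out, err := exec.Command("minikube", "-p", profile, "ip").Output()
        if err != nil {
            log.Fatal(err)
        }
        if got := strings.TrimSpace(string(out)); got != want {
            log.Fatalf("profile %s reports %s, wanted %s", profile, got, want)
        }
        fmt.Println("static IP in effect:", want)
    }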

                                                
                                    
TestMainNoArgs (0.25s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.25s)

                                                
                                    
TestMinikubeProfile (140.32s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-423800 --driver=docker
E0917 17:53:47.618615    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-423800 --driver=docker: (1m4.5873909s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-423800 --driver=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-423800 --driver=docker: (1m2.6050092s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-423800
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
E0917 17:55:44.537291    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (1.5930841s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-423800
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (1.7766833s)
helpers_test.go:175: Cleaning up "second-423800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-423800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-423800: (4.0946871s)
helpers_test.go:175: Cleaning up "first-423800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-423800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-423800: (4.9661579s)
--- PASS: TestMinikubeProfile (140.32s)
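`profile list -ojson` is what the test shells out to when confirming both profiles are registered. A loose Go sketch that only string-matches the profile names in the JSON output instead of decoding the full schema; the profile names are placeholders:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-windows-amd64.exe", "profile", "list", "-ojson").CombinedOutput()
		if err != nil {
			fmt.Printf("profile list failed: %v\n%s\n", err, out)
			return
		}
		// Placeholder profile names; substitute whatever profiles were started.
		for _, name := range []string{"first-423800", "second-423800"} {
			if !strings.Contains(string(out), name) {
				fmt.Println("profile missing from list:", name)
				return
			}
		}
		fmt.Println("both profiles present in the JSON listing")
	}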

                                                
                                    
TestMountStart/serial/StartWithMountFirst (19.51s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-893700 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-893700 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: (18.5047248s)
--- PASS: TestMountStart/serial/StartWithMountFirst (19.51s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.8s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-893700 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.80s)
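The verify steps in this group all reduce to listing /minikube-host over `minikube ssh`, which only succeeds while the host share created by --mount is attached. A minimal Go sketch of that probe; the profile name is a placeholder:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		profile := "mount-start-1-893700" // placeholder: a profile started with --mount
		out, err := exec.Command("out/minikube-windows-amd64.exe", "-p", profile,
			"ssh", "--", "ls", "/minikube-host").CombinedOutput()
		if err != nil {
			fmt.Printf("mount not visible: %v\n%s\n", err, out)
			return
		}
		fmt.Printf("mount contents:\n%s", out)
	}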

                                                
                                    
TestMountStart/serial/StartWithMountSecond (17.97s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-893700 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-893700 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker: (16.9707219s)
--- PASS: TestMountStart/serial/StartWithMountSecond (17.97s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.76s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-893700 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.76s)

                                                
                                    
TestMountStart/serial/DeleteFirst (2.78s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-893700 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-893700 --alsologtostderr -v=5: (2.7802409s)
--- PASS: TestMountStart/serial/DeleteFirst (2.78s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.76s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-893700 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.76s)

                                                
                                    
TestMountStart/serial/Stop (2.05s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-893700
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-893700: (2.0463782s)
--- PASS: TestMountStart/serial/Stop (2.05s)

                                                
                                    
TestMountStart/serial/RestartStopped (12.84s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-893700
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-893700: (11.8373595s)
--- PASS: TestMountStart/serial/RestartStopped (12.84s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.76s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-893700 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.76s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (150.39s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-427300 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
E0917 17:57:18.841563    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-427300 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker: (2m28.5885416s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-427300 status --alsologtostderr: (1.8057788s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (150.39s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (38.51s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-427300 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-427300 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-427300 -- rollout status deployment/busybox: (31.3727108s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-427300 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-427300 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-427300 -- exec busybox-7dff88458-nj2j8 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-427300 -- exec busybox-7dff88458-nj2j8 -- nslookup kubernetes.io: (1.7188898s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-427300 -- exec busybox-7dff88458-vtc86 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-427300 -- exec busybox-7dff88458-vtc86 -- nslookup kubernetes.io: (1.5647514s)
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-427300 -- exec busybox-7dff88458-nj2j8 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-427300 -- exec busybox-7dff88458-vtc86 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-427300 -- exec busybox-7dff88458-nj2j8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-427300 -- exec busybox-7dff88458-vtc86 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (38.51s)
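Each DNS assertion above is just `kubectl exec <pod> -- nslookup <name>` run through the profile-scoped kubectl wrapper. A compact Go sketch of one lookup; the pod name changes on every deployment, so it is a placeholder here:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		profile := "multinode-427300"
		pod := "busybox-7dff88458-nj2j8" // placeholder: take the real name from `kubectl get pods`

		out, err := exec.Command("out/minikube-windows-amd64.exe", "kubectl", "-p", profile, "--",
			"exec", pod, "--", "nslookup", "kubernetes.default").CombinedOutput()
		if err != nil {
			fmt.Printf("in-cluster DNS lookup failed: %v\n%s\n", err, out)
			return
		}
		if !strings.Contains(string(out), "kubernetes.default") {
			fmt.Printf("unexpected nslookup output:\n%s\n", out)
			return
		}
		fmt.Println("cluster DNS resolves kubernetes.default")
	}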

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (2.64s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-427300 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-427300 -- exec busybox-7dff88458-nj2j8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-427300 -- exec busybox-7dff88458-nj2j8 -- sh -c "ping -c 1 192.168.65.254"
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-427300 -- exec busybox-7dff88458-vtc86 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-427300 -- exec busybox-7dff88458-vtc86 -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (2.64s)

                                                
                                    
TestMultiNode/serial/AddNode (50.41s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-427300 -v 3 --alsologtostderr
E0917 18:00:44.539626    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-427300 -v 3 --alsologtostderr: (48.1853524s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-427300 status --alsologtostderr: (2.2262588s)
--- PASS: TestMultiNode/serial/AddNode (50.41s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.19s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-427300 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.19s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.94s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.94s)

                                                
                                    
TestMultiNode/serial/CopyFile (27.87s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-427300 status --output json --alsologtostderr: (1.893108s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 cp testdata\cp-test.txt multinode-427300:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 ssh -n multinode-427300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 cp multinode-427300:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiNodeserialCopyFile1057504637\001\cp-test_multinode-427300.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 ssh -n multinode-427300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 cp multinode-427300:/home/docker/cp-test.txt multinode-427300-m02:/home/docker/cp-test_multinode-427300_multinode-427300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-427300 cp multinode-427300:/home/docker/cp-test.txt multinode-427300-m02:/home/docker/cp-test_multinode-427300_multinode-427300-m02.txt: (1.135101s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 ssh -n multinode-427300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 ssh -n multinode-427300-m02 "sudo cat /home/docker/cp-test_multinode-427300_multinode-427300-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 cp multinode-427300:/home/docker/cp-test.txt multinode-427300-m03:/home/docker/cp-test_multinode-427300_multinode-427300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-427300 cp multinode-427300:/home/docker/cp-test.txt multinode-427300-m03:/home/docker/cp-test_multinode-427300_multinode-427300-m03.txt: (1.1794945s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 ssh -n multinode-427300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 ssh -n multinode-427300-m03 "sudo cat /home/docker/cp-test_multinode-427300_multinode-427300-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 cp testdata\cp-test.txt multinode-427300-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 ssh -n multinode-427300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 cp multinode-427300-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiNodeserialCopyFile1057504637\001\cp-test_multinode-427300-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 ssh -n multinode-427300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 cp multinode-427300-m02:/home/docker/cp-test.txt multinode-427300:/home/docker/cp-test_multinode-427300-m02_multinode-427300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-427300 cp multinode-427300-m02:/home/docker/cp-test.txt multinode-427300:/home/docker/cp-test_multinode-427300-m02_multinode-427300.txt: (1.1354644s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 ssh -n multinode-427300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 ssh -n multinode-427300 "sudo cat /home/docker/cp-test_multinode-427300-m02_multinode-427300.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 cp multinode-427300-m02:/home/docker/cp-test.txt multinode-427300-m03:/home/docker/cp-test_multinode-427300-m02_multinode-427300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-427300 cp multinode-427300-m02:/home/docker/cp-test.txt multinode-427300-m03:/home/docker/cp-test_multinode-427300-m02_multinode-427300-m03.txt: (1.1507943s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 ssh -n multinode-427300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 ssh -n multinode-427300-m03 "sudo cat /home/docker/cp-test_multinode-427300-m02_multinode-427300-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 cp testdata\cp-test.txt multinode-427300-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 ssh -n multinode-427300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 cp multinode-427300-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiNodeserialCopyFile1057504637\001\cp-test_multinode-427300-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 ssh -n multinode-427300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 cp multinode-427300-m03:/home/docker/cp-test.txt multinode-427300:/home/docker/cp-test_multinode-427300-m03_multinode-427300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-427300 cp multinode-427300-m03:/home/docker/cp-test.txt multinode-427300:/home/docker/cp-test_multinode-427300-m03_multinode-427300.txt: (1.1695413s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 ssh -n multinode-427300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 ssh -n multinode-427300 "sudo cat /home/docker/cp-test_multinode-427300-m03_multinode-427300.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 cp multinode-427300-m03:/home/docker/cp-test.txt multinode-427300-m02:/home/docker/cp-test_multinode-427300-m03_multinode-427300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-427300 cp multinode-427300-m03:/home/docker/cp-test.txt multinode-427300-m02:/home/docker/cp-test_multinode-427300-m03_multinode-427300-m02.txt: (1.1333679s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 ssh -n multinode-427300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 ssh -n multinode-427300-m02 "sudo cat /home/docker/cp-test_multinode-427300-m03_multinode-427300-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (27.87s)
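The copy checks above follow one pattern: `minikube cp` a file onto a node, then read it back with `minikube ssh -n <node> "sudo cat ..."` and compare against the local contents. A trimmed-down Go sketch of a single round trip; profile, node, and paths are placeholders:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		profile := "multinode-427300"   // placeholder profile (also the primary node name)
		local := `testdata\cp-test.txt` // any small local file works
		remote := "/home/docker/cp-test.txt"

		want, err := os.ReadFile(local)
		if err != nil {
			fmt.Println("read local file:", err)
			return
		}
		if out, err := exec.Command("out/minikube-windows-amd64.exe", "-p", profile,
			"cp", local, profile+":"+remote).CombinedOutput(); err != nil {
			fmt.Printf("cp failed: %v\n%s\n", err, out)
			return
		}
		out, err := exec.Command("out/minikube-windows-amd64.exe", "-p", profile,
			"ssh", "-n", profile, "sudo cat "+remote).CombinedOutput()
		if err != nil {
			fmt.Printf("ssh cat failed: %v\n%s\n", err, out)
			return
		}
		if strings.TrimSpace(string(out)) != strings.TrimSpace(string(want)) {
			fmt.Println("round-tripped content does not match the local file")
			return
		}
		fmt.Println("cp round trip verified on", profile)
	}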

                                                
                                    
TestMultiNode/serial/StopNode (5.1s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-427300 node stop m03: (2.0775092s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-427300 status: exit status 7 (1.5261177s)

                                                
                                                
-- stdout --
	multinode-427300
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-427300-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-427300-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-427300 status --alsologtostderr: exit status 7 (1.492983s)

                                                
                                                
-- stdout --
	multinode-427300
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-427300-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-427300-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 18:01:32.727177   12628 out.go:345] Setting OutFile to fd 1196 ...
	I0917 18:01:32.807708   12628 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:01:32.807708   12628 out.go:358] Setting ErrFile to fd 1696...
	I0917 18:01:32.807708   12628 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:01:32.825120   12628 out.go:352] Setting JSON to false
	I0917 18:01:32.825120   12628 mustload.go:65] Loading cluster: multinode-427300
	I0917 18:01:32.825120   12628 notify.go:220] Checking for updates...
	I0917 18:01:32.826415   12628 config.go:182] Loaded profile config "multinode-427300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 18:01:32.826415   12628 status.go:255] checking status of multinode-427300 ...
	I0917 18:01:32.844128   12628 cli_runner.go:164] Run: docker container inspect multinode-427300 --format={{.State.Status}}
	I0917 18:01:32.918720   12628 status.go:330] multinode-427300 host status = "Running" (err=<nil>)
	I0917 18:01:32.918720   12628 host.go:66] Checking if "multinode-427300" exists ...
	I0917 18:01:32.926732   12628 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-427300
	I0917 18:01:33.006031   12628 host.go:66] Checking if "multinode-427300" exists ...
	I0917 18:01:33.018794   12628 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 18:01:33.025793   12628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-427300
	I0917 18:01:33.111997   12628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57483 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\multinode-427300\id_rsa Username:docker}
	I0917 18:01:33.259932   12628 ssh_runner.go:195] Run: systemctl --version
	I0917 18:01:33.290483   12628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:01:33.327223   12628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-427300
	I0917 18:01:33.411028   12628 kubeconfig.go:125] found "multinode-427300" server: "https://127.0.0.1:57482"
	I0917 18:01:33.411200   12628 api_server.go:166] Checking apiserver status ...
	I0917 18:01:33.424362   12628 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:01:33.466040   12628 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2486/cgroup
	I0917 18:01:33.487963   12628 api_server.go:182] apiserver freezer: "7:freezer:/docker/3e2ce368f0b34f1ae0b08c65de365c1e25547c950413c7ae4e73b7408c3f5039/kubepods/burstable/pod0ec7674ca77a4ef91e0dcd916638663d/063a2981b062981b3ed82de535f0dd800579cc8a1eeb511de71d8759fdadcbd3"
	I0917 18:01:33.500813   12628 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3e2ce368f0b34f1ae0b08c65de365c1e25547c950413c7ae4e73b7408c3f5039/kubepods/burstable/pod0ec7674ca77a4ef91e0dcd916638663d/063a2981b062981b3ed82de535f0dd800579cc8a1eeb511de71d8759fdadcbd3/freezer.state
	I0917 18:01:33.524405   12628 api_server.go:204] freezer state: "THAWED"
	I0917 18:01:33.524473   12628 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57482/healthz ...
	I0917 18:01:33.536493   12628 api_server.go:279] https://127.0.0.1:57482/healthz returned 200:
	ok
	I0917 18:01:33.536493   12628 status.go:422] multinode-427300 apiserver status = Running (err=<nil>)
	I0917 18:01:33.536493   12628 status.go:257] multinode-427300 status: &{Name:multinode-427300 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 18:01:33.536493   12628 status.go:255] checking status of multinode-427300-m02 ...
	I0917 18:01:33.552776   12628 cli_runner.go:164] Run: docker container inspect multinode-427300-m02 --format={{.State.Status}}
	I0917 18:01:33.632985   12628 status.go:330] multinode-427300-m02 host status = "Running" (err=<nil>)
	I0917 18:01:33.632985   12628 host.go:66] Checking if "multinode-427300-m02" exists ...
	I0917 18:01:33.641994   12628 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-427300-m02
	I0917 18:01:33.722988   12628 host.go:66] Checking if "multinode-427300-m02" exists ...
	I0917 18:01:33.736029   12628 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 18:01:33.743018   12628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-427300-m02
	I0917 18:01:33.814005   12628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57564 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\multinode-427300-m02\id_rsa Username:docker}
	I0917 18:01:33.956453   12628 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:01:33.981455   12628 status.go:257] multinode-427300-m02 status: &{Name:multinode-427300-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0917 18:01:33.981487   12628 status.go:255] checking status of multinode-427300-m03 ...
	I0917 18:01:33.997395   12628 cli_runner.go:164] Run: docker container inspect multinode-427300-m03 --format={{.State.Status}}
	I0917 18:01:34.076879   12628 status.go:330] multinode-427300-m03 host status = "Stopped" (err=<nil>)
	I0917 18:01:34.076879   12628 status.go:343] host is not running, skipping remaining checks
	I0917 18:01:34.076879   12628 status.go:257] multinode-427300-m03 status: &{Name:multinode-427300-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (5.10s)
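Note that `minikube status` exits with code 7 once any node is stopped, which is why the run above records a non-zero exit while still printing the per-node report. A small Go sketch that treats that exit code as a report rather than a failure; the profile name is a placeholder:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		profile := "multinode-427300" // placeholder
		out, err := exec.Command("out/minikube-windows-amd64.exe", "-p", profile, "status").CombinedOutput()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Printf("all nodes running:\n%s", out)
		case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
			// Exit code 7 is how status signals a stopped host, as seen in this run.
			fmt.Printf("one or more nodes stopped:\n%s", out)
		default:
			fmt.Printf("status failed outright: %v\n%s", err, out)
		}
	}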

                                                
                                    
TestMultiNode/serial/StartAfterStop (18.94s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-427300 node start m03 -v=7 --alsologtostderr: (16.8767266s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-427300 status -v=7 --alsologtostderr: (1.8816306s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (18.94s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (124.43s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-427300
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-427300
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-427300: (25.0461481s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-427300 --wait=true -v=8 --alsologtostderr
E0917 18:02:18.844408    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-427300 --wait=true -v=8 --alsologtostderr: (1m38.9098676s)
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-427300
--- PASS: TestMultiNode/serial/RestartKeepsNodes (124.43s)

                                                
                                    
TestMultiNode/serial/DeleteNode (10.29s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-427300 node delete m03: (8.3349448s)
multinode_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 status --alsologtostderr
multinode_test.go:422: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-427300 status --alsologtostderr: (1.4670414s)
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (10.29s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.38s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 stop
multinode_test.go:345: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-427300 stop: (23.5556624s)
multinode_test.go:351: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-427300 status: exit status 7 (412.8931ms)

                                                
                                                
-- stdout --
	multinode-427300
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-427300-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-427300 status --alsologtostderr: exit status 7 (412.2549ms)

                                                
                                                
-- stdout --
	multinode-427300
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-427300-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 18:04:31.842809    6728 out.go:345] Setting OutFile to fd 1936 ...
	I0917 18:04:31.917236    6728 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:04:31.917236    6728 out.go:358] Setting ErrFile to fd 1316...
	I0917 18:04:31.917236    6728 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:04:31.930540    6728 out.go:352] Setting JSON to false
	I0917 18:04:31.930540    6728 mustload.go:65] Loading cluster: multinode-427300
	I0917 18:04:31.930725    6728 notify.go:220] Checking for updates...
	I0917 18:04:31.930982    6728 config.go:182] Loaded profile config "multinode-427300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0917 18:04:31.930982    6728 status.go:255] checking status of multinode-427300 ...
	I0917 18:04:31.949137    6728 cli_runner.go:164] Run: docker container inspect multinode-427300 --format={{.State.Status}}
	I0917 18:04:32.026727    6728 status.go:330] multinode-427300 host status = "Stopped" (err=<nil>)
	I0917 18:04:32.026727    6728 status.go:343] host is not running, skipping remaining checks
	I0917 18:04:32.026727    6728 status.go:257] multinode-427300 status: &{Name:multinode-427300 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 18:04:32.026727    6728 status.go:255] checking status of multinode-427300-m02 ...
	I0917 18:04:32.045551    6728 cli_runner.go:164] Run: docker container inspect multinode-427300-m02 --format={{.State.Status}}
	I0917 18:04:32.124129    6728 status.go:330] multinode-427300-m02 host status = "Stopped" (err=<nil>)
	I0917 18:04:32.124129    6728 status.go:343] host is not running, skipping remaining checks
	I0917 18:04:32.124129    6728 status.go:257] multinode-427300-m02 status: &{Name:multinode-427300-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.38s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (70.35s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-427300 --wait=true -v=8 --alsologtostderr --driver=docker
E0917 18:05:21.924370    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-427300 --wait=true -v=8 --alsologtostderr --driver=docker: (1m8.4322485s)
multinode_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-427300 status --alsologtostderr
multinode_test.go:382: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-427300 status --alsologtostderr: (1.3695842s)
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (70.35s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (67.53s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-427300
multinode_test.go:464: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-427300-m02 --driver=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-427300-m02 --driver=docker: exit status 14 (300.8395ms)

                                                
                                                
-- stdout --
	* [multinode-427300-m02] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19662
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-427300-m02' is duplicated with machine name 'multinode-427300-m02' in profile 'multinode-427300'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-427300-m03 --driver=docker
E0917 18:05:44.542959    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-427300-m03 --driver=docker: (1m2.2083819s)
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-427300
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-427300: exit status 80 (860.6772ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-427300 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-427300-m03 already exists in multinode-427300-m03 profile
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_e3f75f9fdd712fd5423563a6a11e787bf6359068_6.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-427300-m03
multinode_test.go:484: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-427300-m03: (3.9302638s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (67.53s)

                                                
                                    
TestPreload (169.75s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-762300 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4
E0917 18:07:18.846892    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-762300 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4: (1m55.0163194s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-762300 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-762300 image pull gcr.io/k8s-minikube/busybox: (2.2855837s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-762300
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-762300: (12.2733878s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-762300 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-762300 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker: (34.9657335s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-762300 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-762300 image list: (1.0175504s)
helpers_test.go:175: Cleaning up "test-preload-762300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-762300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-762300: (4.1851568s)
--- PASS: TestPreload (169.75s)
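TestPreload's flow is: start with --preload=false, pull an extra image, stop, restart, then confirm the image is still listed. A condensed Go sketch of the pull/stop/restart/list sequence against an already-created profile; the profile name is a placeholder and the flags are the ones used in this run:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// run shells out to the minikube binary used in this report.
	func run(args ...string) (string, error) {
		out, err := exec.Command("out/minikube-windows-amd64.exe", args...).CombinedOutput()
		return string(out), err
	}

	func main() {
		profile := "test-preload-762300" // placeholder
		if out, err := run("-p", profile, "image", "pull", "gcr.io/k8s-minikube/busybox"); err != nil {
			fmt.Printf("image pull failed: %v\n%s", err, out)
			return
		}
		if out, err := run("stop", "-p", profile); err != nil {
			fmt.Printf("stop failed: %v\n%s", err, out)
			return
		}
		if out, err := run("start", "-p", profile, "--wait=true", "--driver=docker"); err != nil {
			fmt.Printf("restart failed: %v\n%s", err, out)
			return
		}
		out, err := run("-p", profile, "image", "list")
		if err != nil {
			fmt.Printf("image list failed: %v\n%s", err, out)
			return
		}
		if !strings.Contains(out, "busybox") {
			fmt.Println("busybox image missing after restart")
			return
		}
		fmt.Println("pulled image still present after restart")
	}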

                                                
                                    
TestScheduledStopWindows (134.47s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-299800 --memory=2048 --driver=docker
E0917 18:10:27.630457    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:10:44.546445    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-299800 --memory=2048 --driver=docker: (1m5.7019833s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-299800 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-299800 --schedule 5m: (1.6715494s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-299800 -n scheduled-stop-299800
scheduled_stop_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-299800 -n scheduled-stop-299800: (1.1576211s)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-299800 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-299800 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-299800 --schedule 5s: (1.3256139s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-299800
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-299800: exit status 7 (328.5569ms)

                                                
                                                
-- stdout --
	scheduled-stop-299800
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-299800 -n scheduled-stop-299800
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-299800 -n scheduled-stop-299800: exit status 7 (319.688ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-299800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-299800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-299800: (2.9624066s)
--- PASS: TestScheduledStopWindows (134.47s)
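The scheduled-stop check is: request `stop --schedule 5s`, then poll `status --format={{.Host}}` until the host reports Stopped (surfacing as exit code 7, as above). A rough Go sketch of that poll loop; the profile name and the one-minute polling window are arbitrary choices here:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		profile := "scheduled-stop-299800" // placeholder
		if out, err := exec.Command("out/minikube-windows-amd64.exe", "stop", "-p", profile,
			"--schedule", "5s").CombinedOutput(); err != nil {
			fmt.Printf("scheduling stop failed: %v\n%s", err, out)
			return
		}
		for i := 0; i < 12; i++ {
			time.Sleep(5 * time.Second)
			out, err := exec.Command("out/minikube-windows-amd64.exe", "status",
				"--format={{.Host}}", "-p", profile).CombinedOutput()
			var exitErr *exec.ExitError
			if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
				fmt.Printf("host state: %s", out)
				return
			}
		}
		fmt.Println("node never reached Stopped within the polling window")
	}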

                                                
                                    
TestInsufficientStorage (43.08s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-547000 --memory=2048 --output=json --wait=true --driver=docker
E0917 18:12:18.849713    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-547000 --memory=2048 --output=json --wait=true --driver=docker: exit status 26 (38.3299932s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e5c7a707-1fc0-4ec3-9715-b13a2f9ba930","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-547000] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d3a1fef0-4b4d-40b9-84d0-983d750a4a23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"bc308fcd-7afb-438b-86b4-17fbcd21780a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b85cc5a1-5ee8-447d-9e35-e10809a4c2c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"e1e0997b-e454-4f70-b099-5a9c74311447","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19662"}}
	{"specversion":"1.0","id":"d3f41073-e67d-45c8-b8ca-85b0ee0b9f9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b073a3d6-d9a5-4048-b709-e436b9fdea36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"d076a21a-f6d1-4eae-a194-1ed524867d4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"123e1d9e-0ccf-4f72-804a-3f3103c69aec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9703cadf-3d6a-479a-a03e-54132f5ad719","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"16eb9c54-581e-4774-9214-7346f0ec5a41","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-547000\" primary control-plane node in \"insufficient-storage-547000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6c0927df-2627-4a92-a6fc-de69eec10b54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726589491-19662 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"d6168477-ea2c-42ef-a51e-de9c0480aadc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"bea0315c-882f-4362-b1f1-5fc3f65e5594","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-547000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-547000 --output=json --layout=cluster: exit status 7 (820.4591ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-547000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-547000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 18:12:41.317354    2964 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-547000" does not appear in C:\Users\jenkins.minikube2\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-547000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-547000 --output=json --layout=cluster: exit status 7 (805.2429ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-547000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-547000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0917 18:12:42.126398   12852 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-547000" does not appear in C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	E0917 18:12:42.164096   12852 status.go:560] unable to read event log: stat: CreateFile C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\insufficient-storage-547000\events.json: The system cannot find the file specified.

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-547000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-547000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-547000: (3.1196766s)
--- PASS: TestInsufficientStorage (43.08s)
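The cluster-layout JSON printed above is what the test inspects for StatusCode 507 ("InsufficientStorage"). A small Go sketch that decodes just the top-level fields visible in that output; the profile name is a placeholder, and the non-zero exit from status is expected and ignored:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// clusterState mirrors only the top-level fields of `status --output=json --layout=cluster`
	// that appear in the captured stdout above.
	type clusterState struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	}

	func main() {
		profile := "insufficient-storage-547000" // placeholder
		out, _ := exec.Command("out/minikube-windows-amd64.exe", "status", "-p", profile,
			"--output=json", "--layout=cluster").Output() // exit code 7 is expected; stdout still carries the JSON
		var st clusterState
		if err := json.Unmarshal(out, &st); err != nil {
			fmt.Printf("could not decode status JSON: %v\n%s", err, out)
			return
		}
		if st.StatusCode == 507 {
			fmt.Printf("%s reports %s (507): /var is out of disk space\n", st.Name, st.StatusName)
			return
		}
		fmt.Printf("%s status: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
	}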

                                                
                                    
TestRunningBinaryUpgrade (199.05s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.26.0.1361989213.exe start -p running-upgrade-402600 --memory=2200 --vm-driver=docker
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.26.0.1361989213.exe start -p running-upgrade-402600 --memory=2200 --vm-driver=docker: (1m50.9953545s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-402600 --memory=2200 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-402600 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m19.0366073s)
helpers_test.go:175: Cleaning up "running-upgrade-402600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-402600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-402600: (7.6527753s)
--- PASS: TestRunningBinaryUpgrade (199.05s)

                                                
                                    
x
+
TestKubernetesUpgrade (517.96s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-849400 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-849400 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker: (2m2.9142113s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-849400
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-849400: (4.1344641s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-849400 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-849400 status --format={{.Host}}: exit status 7 (414.9177ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-849400 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-849400 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker: (5m27.8429773s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-849400 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-849400 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-849400 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker: exit status 106 (301.6159ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-849400] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19662
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-849400
	    minikube start -p kubernetes-upgrade-849400 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8494002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-849400 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-849400 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-849400 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker: (46.6071762s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-849400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-849400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-849400: (15.5734091s)
--- PASS: TestKubernetesUpgrade (517.96s)
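
The downgrade attempt above is rejected with exit status 106 and the K8S_DOWNGRADE_UNSUPPORTED reason. A rough sketch of how a caller can run that start command and branch on the exit code; this is illustrative only, not the version_upgrade_test.go logic:

    // Illustrative only: run the refused downgrade and check for exit code 106.
    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("out/minikube-windows-amd64.exe", "start",
    		"-p", "kubernetes-upgrade-849400",
    		"--memory=2200", "--kubernetes-version=v1.20.0", "--driver=docker")
    	out, err := cmd.CombinedOutput()
    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) && exitErr.ExitCode() == 106 {
    		fmt.Println("downgrade refused as expected (exit 106)")
    		return
    	}
    	fmt.Printf("unexpected result: err=%v\n%s\n", err, out)
    }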

                                                
                                    
x
+
TestMissingContainerUpgrade (300.97s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.26.0.3863701480.exe start -p missing-upgrade-465800 --memory=2200 --driver=docker
version_upgrade_test.go:309: (dbg) Done: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.26.0.3863701480.exe start -p missing-upgrade-465800 --memory=2200 --driver=docker: (2m19.1250409s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-465800
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-465800: (10.9799005s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-465800
version_upgrade_test.go:329: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-465800 --memory=2200 --alsologtostderr -v=1 --driver=docker
E0917 18:17:18.852688    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-465800 --memory=2200 --alsologtostderr -v=1 --driver=docker: (2m19.3182011s)
helpers_test.go:175: Cleaning up "missing-upgrade-465800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-465800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-465800: (9.8583272s)
--- PASS: TestMissingContainerUpgrade (300.97s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.88s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.88s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-855600 --no-kubernetes --kubernetes-version=1.20 --driver=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-855600 --no-kubernetes --kubernetes-version=1.20 --driver=docker: exit status 14 (377.1259ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-855600] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19662
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.38s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (104.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-855600 --driver=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-855600 --driver=docker: (1m43.4924448s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-855600 status -o json
no_kubernetes_test.go:200: (dbg) Done: out/minikube-windows-amd64.exe -p NoKubernetes-855600 status -o json: (1.463054s)
--- PASS: TestNoKubernetes/serial/StartWithK8s (104.96s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (346.42s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.26.0.3198482231.exe start -p stopped-upgrade-387600 --memory=2200 --vm-driver=docker
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.26.0.3198482231.exe start -p stopped-upgrade-387600 --memory=2200 --vm-driver=docker: (4m8.1653784s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.26.0.3198482231.exe -p stopped-upgrade-387600 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.26.0.3198482231.exe -p stopped-upgrade-387600 stop: (12.7784746s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-387600 --memory=2200 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-387600 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m25.4696892s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (346.42s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (32.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-855600 --no-kubernetes --driver=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-855600 --no-kubernetes --driver=docker: (26.1692436s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-855600 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p NoKubernetes-855600 status -o json: exit status 2 (1.0336431s)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-855600","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-windows-amd64.exe delete -p NoKubernetes-855600
no_kubernetes_test.go:124: (dbg) Done: out/minikube-windows-amd64.exe delete -p NoKubernetes-855600: (5.1771099s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (32.38s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (33.92s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-855600 --no-kubernetes --driver=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-855600 --no-kubernetes --driver=docker: (33.920783s)
--- PASS: TestNoKubernetes/serial/Start (33.92s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.77s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-855600 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-855600 "sudo systemctl is-active --quiet service kubelet": exit status 1 (772.3175ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.77s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (3.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-windows-amd64.exe profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-windows-amd64.exe profile list: (1.5366271s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe profile list --output=json: (1.6575713s)
--- PASS: TestNoKubernetes/serial/ProfileList (3.19s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (6.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-windows-amd64.exe stop -p NoKubernetes-855600
E0917 18:15:44.547746    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
no_kubernetes_test.go:158: (dbg) Done: out/minikube-windows-amd64.exe stop -p NoKubernetes-855600: (6.711474s)
--- PASS: TestNoKubernetes/serial/Stop (6.71s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (26.80s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-855600 --driver=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-855600 --driver=docker: (26.800542s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (26.80s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.81s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-855600 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-855600 "sudo systemctl is-active --quiet service kubelet": exit status 1 (805.0197ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.81s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (3.06s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-387600
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-387600: (3.0613962s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.06s)

                                                
                                    
x
+
TestPause/serial/Start (107.93s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-482600 --memory=2048 --install-addons=false --wait=all --driver=docker
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-482600 --memory=2048 --install-addons=false --wait=all --driver=docker: (1m47.9312856s)
--- PASS: TestPause/serial/Start (107.93s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (46.44s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-482600 --alsologtostderr -v=1 --driver=docker
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-482600 --alsologtostderr -v=1 --driver=docker: (46.4114572s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (46.44s)

                                                
                                    
x
+
TestPause/serial/Pause (1.59s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-482600 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-482600 --alsologtostderr -v=5: (1.5848275s)
--- PASS: TestPause/serial/Pause (1.59s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.94s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-482600 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-482600 --output=json --layout=cluster: exit status 2 (941.3121ms)

                                                
                                                
-- stdout --
	{"Name":"pause-482600","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-482600","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.94s)
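
For reference, the StatusCode values that appear in the --layout=cluster payloads throughout this report pair up with the StatusName fields as follows; the constant names below are ours for readability, not identifiers from minikube's source:

    // Status codes observed in the --layout=cluster outputs in this report.
    package main

    import "fmt"

    const (
    	statusOK                  = 200 // healthy component or node
    	statusStopped             = 405 // apiserver or kubelet stopped
    	statusPaused              = 418 // paused apiserver or cluster
    	statusError               = 500 // e.g. kubeconfig endpoint missing
    	statusInsufficientStorage = 507 // "/var is almost out of disk space"
    )

    func main() {
    	fmt.Println(statusOK, statusStopped, statusPaused, statusError, statusInsufficientStorage)
    }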

                                                
                                    
x
+
TestPause/serial/Unpause (1.38s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-482600 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-482600 --alsologtostderr -v=5: (1.3759672s)
--- PASS: TestPause/serial/Unpause (1.38s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.83s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-482600 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-482600 --alsologtostderr -v=5: (1.8296253s)
--- PASS: TestPause/serial/PauseAgain (1.83s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (5.00s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-482600 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-482600 --alsologtostderr -v=5: (4.9954222s)
--- PASS: TestPause/serial/DeletePaused (5.00s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (4.31s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (3.9908176s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-482600
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-482600: exit status 1 (90.2853ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-482600: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (4.31s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (240.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-844500 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0
E0917 18:22:18.856022    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-844500 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0: (4m0.1067596s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (240.11s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (131.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-132700 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-132700 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.31.1: (2m11.3945709s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (131.39s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (115.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-486300 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-486300 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.31.1: (1m55.1404752s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (115.14s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-477900 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-477900 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.31.1: (1m23.4542299s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.46s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-132700 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8da9241b-814a-4fa1-b50f-536bfd9c5f4e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8da9241b-814a-4fa1-b50f-536bfd9c5f4e] Running
E0917 18:25:44.553819    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.0105318s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-132700 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.86s)
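
The DeployApp steps here create testdata\busybox.yaml and then wait up to 8m0s for a pod matching integration-test=busybox to reach Running. A rough client-go sketch of that kind of label-selector wait, assuming the default kubeconfig location; it is illustrative only, not the helpers_test.go implementation:

    // Illustrative only: poll for a Running pod with a given label selector.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 8*time.Minute)
    	defer cancel()
    	for {
    		pods, err := client.CoreV1().Pods("default").List(ctx,
    			metav1.ListOptions{LabelSelector: "integration-test=busybox"})
    		if err != nil {
    			panic(err)
    		}
    		for _, p := range pods.Items {
    			if p.Status.Phase == "Running" {
    				fmt.Println("pod running:", p.Name)
    				return
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    }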

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.92s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-486300 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e8ef6100-8d20-4044-982d-5d21cddc8047] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e8ef6100-8d20-4044-982d-5d21cddc8047] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.0114328s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-486300 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.92s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.64s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-132700 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-132700 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.2852907s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-132700 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.64s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.60s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-132700 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-132700 --alsologtostderr -v=3: (12.5965273s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.60s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-486300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-486300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.9993241s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-486300 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.50s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-486300 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-486300 --alsologtostderr -v=3: (12.497049s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.50s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-132700 -n no-preload-132700
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-132700 -n no-preload-132700: exit status 7 (342.8424ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-132700 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.86s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-477900 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4f2de104-9081-453f-8819-229973d9e425] Pending
helpers_test.go:344: "busybox" [4f2de104-9081-453f-8819-229973d9e425] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4f2de104-9081-453f-8819-229973d9e425] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.010283s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-477900 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.88s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (292.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-132700 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-132700 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.31.1: (4m51.308302s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-132700 -n no-preload-132700
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (292.28s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-844500 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4f8388e7-7448-4b93-a03a-fc82352e5b34] Pending
helpers_test.go:344: "busybox" [4f8388e7-7448-4b93-a03a-fc82352e5b34] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4f8388e7-7448-4b93-a03a-fc82352e5b34] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.0095613s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-844500 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.32s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.88s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-486300 -n embed-certs-486300
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-486300 -n embed-certs-486300: exit status 7 (335.7149ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-486300 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.88s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (315.71s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-486300 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-486300 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.31.1: (5m14.252323s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-486300 -n embed-certs-486300
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-486300 -n embed-certs-486300: (1.46138s)
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (315.71s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.55s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-477900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-477900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.2418604s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-477900 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.55s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.74s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-844500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-844500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.3142554s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-844500 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.74s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (13.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-477900 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-477900 --alsologtostderr -v=3: (13.1401469s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.14s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (13.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-844500 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-844500 --alsologtostderr -v=3: (13.1193989s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.12s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-477900 -n default-k8s-diff-port-477900
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-477900 -n default-k8s-diff-port-477900: exit status 7 (395.2042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-diff-port-477900 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.96s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (306.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-477900 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-477900 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.31.1: (5m5.2878964s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-477900 -n default-k8s-diff-port-477900
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-477900 -n default-k8s-diff-port-477900: (1.0628937s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (306.35s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-844500 -n old-k8s-version-844500
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-844500 -n old-k8s-version-844500: exit status 7 (412.2677ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-844500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.99s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (344.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-844500 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0
E0917 18:27:07.642093    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:27:18.858559    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:30:44.556248    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-844500 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0: (5m43.1597115s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-844500 -n old-k8s-version-844500
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-844500 -n old-k8s-version-844500: (1.163804s)
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (344.33s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wllhk" [a379427f-da20-46a3-9514-6b20710a37e7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0079868s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wllhk" [a379427f-da20-46a3-9514-6b20710a37e7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0108102s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-132700 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.45s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.73s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-132700 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.73s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (7.48s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-132700 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p no-preload-132700 --alsologtostderr -v=1: (1.6294075s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-132700 -n no-preload-132700
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-132700 -n no-preload-132700: exit status 2 (975.2701ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-132700 -n no-preload-132700
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-132700 -n no-preload-132700: exit status 2 (960.8006ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p no-preload-132700 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p no-preload-132700 --alsologtostderr -v=1: (1.4846407s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-132700 -n no-preload-132700
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-132700 -n no-preload-132700: (1.3724645s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-132700 -n no-preload-132700
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-132700 -n no-preload-132700: (1.0598685s)
--- PASS: TestStartStop/group/no-preload/serial/Pause (7.48s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (86.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-092500 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-092500 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.31.1: (1m26.4113506s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (86.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-rtpm6" [f086ddb4-0df3-4d54-9745-eae64f3d38cf] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006813s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.45s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-rtpm6" [f086ddb4-0df3-4d54-9745-eae64f3d38cf] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0108463s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-486300 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.45s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.73s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p embed-certs-486300 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.73s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-g4k5z" [99df27c8-257b-4673-825b-974f68b46a09] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0094756s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (8.45s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-486300 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-486300 --alsologtostderr -v=1: (1.6857769s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-486300 -n embed-certs-486300
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-486300 -n embed-certs-486300: exit status 2 (1.0384362s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-486300 -n embed-certs-486300
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-486300 -n embed-certs-486300: exit status 2 (954.6366ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-486300 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-486300 --alsologtostderr -v=1: (1.4298239s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-486300 -n embed-certs-486300
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-486300 -n embed-certs-486300: (1.6053499s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-486300 -n embed-certs-486300
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-486300 -n embed-certs-486300: (1.732007s)
--- PASS: TestStartStop/group/embed-certs/serial/Pause (8.45s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (7.68s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-g4k5z" [99df27c8-257b-4673-825b-974f68b46a09] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.1592493s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-477900 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (7.68s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p default-k8s-diff-port-477900 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.88s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (9.37s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-477900 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-477900 --alsologtostderr -v=1: (2.4553094s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-477900 -n default-k8s-diff-port-477900
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-477900 -n default-k8s-diff-port-477900: exit status 2 (1.1649228s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-477900 -n default-k8s-diff-port-477900
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-477900 -n default-k8s-diff-port-477900: exit status 2 (1.1212902s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-477900 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-477900 --alsologtostderr -v=1: (1.7884856s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-477900 -n default-k8s-diff-port-477900
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-477900 -n default-k8s-diff-port-477900: (1.6363131s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-477900 -n default-k8s-diff-port-477900
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-477900 -n default-k8s-diff-port-477900: (1.2010891s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (9.37s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (108.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-762300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-762300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker: (1m48.4124092s)
--- PASS: TestNetworkPlugins/group/auto/Start (108.41s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (118.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-762300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker
E0917 18:32:18.861155    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kindnet-762300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker: (1m58.3033906s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (118.30s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-lspw4" [1437667a-cbda-462f-beb3-3580c0f5f775] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0112242s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.79s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-lspw4" [1437667a-cbda-462f-beb3-3580c0f5f775] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0104472s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-844500 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.79s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.76s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-844500 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.76s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (9.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-844500 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p old-k8s-version-844500 --alsologtostderr -v=1: (2.705355s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-844500 -n old-k8s-version-844500
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-844500 -n old-k8s-version-844500: exit status 2 (1.1575216s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-844500 -n old-k8s-version-844500
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-844500 -n old-k8s-version-844500: exit status 2 (1.1180991s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p old-k8s-version-844500 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p old-k8s-version-844500 --alsologtostderr -v=1: (1.6940207s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-844500 -n old-k8s-version-844500
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-844500 -n old-k8s-version-844500: (1.7937781s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-844500 -n old-k8s-version-844500
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-844500 -n old-k8s-version-844500: (1.3284228s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (9.80s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.52s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-092500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-092500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.5209089s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.52s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (177.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-762300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p calico-762300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker: (2m57.7943457s)
--- PASS: TestNetworkPlugins/group/calico/Start (177.79s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (15.48s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-092500 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-092500 --alsologtostderr -v=3: (15.475683s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (15.48s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-092500 -n newest-cni-092500
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-092500 -n newest-cni-092500: exit status 7 (365.8891ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-092500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (1.01s)
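This step amounts to confirming the profile is stopped and that an addon can still be enabled against the stopped profile. A minimal sketch with the values from this run (exit status 7 from status is the expected "Stopped" case, and the binary is abbreviated to minikube):

	minikube status --format='{{.Host}}' -p newest-cni-092500 -n newest-cni-092500   # expect "Stopped" (exit status 7)
	minikube addons enable dashboard -p newest-cni-092500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4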

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (35.64s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-092500 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-092500 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.31.1: (34.368711s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-092500 -n newest-cni-092500
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-092500 -n newest-cni-092500: (1.2768975s)
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (35.64s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-092500 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.96s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (10.72s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-092500 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p newest-cni-092500 --alsologtostderr -v=1: (2.3093704s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-092500 -n newest-cni-092500
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-092500 -n newest-cni-092500: exit status 2 (1.3264306s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-092500 -n newest-cni-092500
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-092500 -n newest-cni-092500: exit status 2 (1.3829003s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p newest-cni-092500 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p newest-cni-092500 --alsologtostderr -v=1: (2.3872124s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-092500 -n newest-cni-092500
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-092500 -n newest-cni-092500: (1.901074s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-092500 -n newest-cni-092500
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-092500 -n newest-cni-092500: (1.4143348s)
--- PASS: TestStartStop/group/newest-cni/serial/Pause (10.72s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-762300 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.95s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (21.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-762300 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-m9hn7" [1a9e9513-0e77-4106-867a-adf60f1cb8af] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-m9hn7" [1a9e9513-0e77-4106-867a-adf60f1cb8af] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 21.0116997s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (21.83s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (120.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-flannel-762300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-flannel-762300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker: (2m0.3151724s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (120.32s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-762300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.51s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-762300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.66s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-6lb6c" [3cd221ad-07ef-47a5-b425-61f8a4c0657c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.0112236s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.02s)
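The ControllerPod step waits up to 10m for the CNI daemon pod matching the label above to become healthy. The test uses its own polling helper rather than kubectl; a rough manual equivalent, using the label, namespace and context from this run, would be:

	kubectl --context kindnet-762300 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m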

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-762300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.45s)
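The DNS, Localhost and HairPin checks for each network plugin reduce to three probes against the netcat deployment created in NetCatPod; the commands are recorded verbatim above and can be rerun as-is (context name taken from this run):

	# Cluster DNS: resolve the kubernetes.default service from inside the pod.
	kubectl --context auto-762300 exec deployment/netcat -- nslookup kubernetes.default
	# Localhost: port 8080 reachable over the pod's own loopback.
	kubectl --context auto-762300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# Hairpin: the pod reaching itself back through its own service name.
	kubectl --context auto-762300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"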

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kindnet-762300 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.85s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (28.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-762300 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-s7dtf" [7a81bfe0-2053-4aa6-b332-d2d333219e7d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-s7dtf" [7a81bfe0-2053-4aa6-b332-d2d333219e7d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 28.008153s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (28.79s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-762300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.39s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-762300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.41s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-762300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.39s)

                                                
                                    
TestNetworkPlugins/group/false/Start (115.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-762300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p false-762300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker: (1m55.6228141s)
--- PASS: TestNetworkPlugins/group/false/Start (115.62s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (130.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p flannel-762300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker
E0917 18:35:40.982948    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\no-preload-132700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:35:41.625345    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\no-preload-132700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:35:42.907697    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\no-preload-132700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:35:44.559217    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:35:45.470315    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\no-preload-132700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p flannel-762300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker: (2m10.5574276s)
--- PASS: TestNetworkPlugins/group/flannel/Start (130.56s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-lgw6h" [eafcdbb1-6adb-4327-a69d-1809dd96b8d8] Running
E0917 18:35:50.592678    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\no-preload-132700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.0147279s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.02s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (1.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p calico-762300 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p calico-762300 "pgrep -a kubelet": (1.0949878s)
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (1.10s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (24.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-762300 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-flc2x" [d53551c5-bf8b-458d-9af7-8f48779d12d8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0917 18:36:00.834493    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\no-preload-132700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:36:05.902327    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-477900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:36:05.909076    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-477900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:36:05.921140    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-477900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:36:05.943058    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-477900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:36:05.985006    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-477900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:36:06.067743    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-477900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:36:06.229678    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-477900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-flc2x" [d53551c5-bf8b-458d-9af7-8f48779d12d8] Running
E0917 18:36:15.084258    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-844500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:36:16.161816    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-477900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:36:20.208121    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-844500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 24.0129362s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (24.98s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (1.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p custom-flannel-762300 "pgrep -a kubelet"
E0917 18:36:06.552089    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-477900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:36:07.194198    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-477900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p custom-flannel-762300 "pgrep -a kubelet": (1.0043202s)
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (1.01s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (21.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-762300 replace --force -f testdata\netcat-deployment.yaml
E0917 18:36:08.476519    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-477900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:149: (dbg) Done: kubectl --context custom-flannel-762300 replace --force -f testdata\netcat-deployment.yaml: (1.3829174s)
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vcg92" [6940ba8e-1b5e-4809-b897-7be6a097582d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0917 18:36:09.940029    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-844500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:36:09.949737    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-844500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:36:09.962962    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-844500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:36:09.987503    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-844500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:36:10.029790    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-844500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:36:10.111958    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-844500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:36:10.274460    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-844500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:36:10.596499    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-844500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:36:11.038835    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-477900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:36:11.238626    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-844500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:36:12.521022    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-844500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-vcg92" [6940ba8e-1b5e-4809-b897-7be6a097582d] Running
E0917 18:36:26.405530    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-477900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 20.0141533s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (21.79s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-762300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.53s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-762300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0917 18:36:21.317430    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\no-preload-132700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.43s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-762300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.40s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-762300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.74s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-762300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0917 18:36:30.450842    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-844500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.48s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-762300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.49s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (1.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-762300 "pgrep -a kubelet"
E0917 18:37:02.280174    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\no-preload-132700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p false-762300 "pgrep -a kubelet": (1.2699714s)
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (1.27s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (23.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-762300 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7j9xh" [0c949cf8-7a4b-40fc-978b-2645a3b64f6d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0917 18:37:18.865220    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-388800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-7j9xh" [0c949cf8-7a4b-40fc-978b-2645a3b64f6d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 23.0192598s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (23.96s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-762300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.57s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-762300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.53s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-762300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0917 18:37:27.850861    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-477900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.93s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (140.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-762300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-762300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker: (2m20.5947848s)
--- PASS: TestNetworkPlugins/group/bridge/Start (140.60s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (134.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-762300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-762300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker: (2m14.7700809s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (134.77s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-s2694" [3cdac877-6c66-4015-910f-c17745ce5937] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.0112904s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.02s)
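The ControllerPod check waits for the flannel DaemonSet pod (label app=flannel) in the kube-flannel namespace to be Running. An equivalent manual check might look like the sketch below; the DaemonSet name kube-flannel-ds is inferred from the pod name in the log and may differ:

# hypothetical manual check; daemonset name inferred from the pod name above
kubectl --context flannel-762300 -n kube-flannel get pods -l app=flannel
kubectl --context flannel-762300 -n kube-flannel rollout status daemonset/kube-flannel-ds --timeout=10m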

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (1.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p flannel-762300 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p flannel-762300 "pgrep -a kubelet": (1.0588853s)
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (1.06s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (34.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-762300 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-d6ksq" [b74f613f-a756-4a8d-b0c8-1344210e8593] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-d6ksq" [b74f613f-a756-4a8d-b0c8-1344210e8593] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 34.0150524s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (34.99s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (125.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-762300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-762300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker: (2m5.5633829s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (125.56s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-762300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.49s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-762300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.41s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-762300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.40s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-762300 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.94s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (21.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-762300 replace --force -f testdata\netcat-deployment.yaml
E0917 18:39:51.581655    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\kindnet-762300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9dvmf" [dba5bae3-bd37-4105-8fa2-97619d1adcaf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9dvmf" [dba5bae3-bd37-4105-8fa2-97619d1adcaf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 21.0083456s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (21.75s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (1.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-762300 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p enable-default-cni-762300 "pgrep -a kubelet": (1.3593254s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (1.36s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (20.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-762300 replace --force -f testdata\netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context enable-default-cni-762300 replace --force -f testdata\netcat-deployment.yaml: (1.0375458s)
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-f87v4" [edf432d0-ae02-49e9-88bd-82a6545685ba] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-f87v4" [edf432d0-ae02-49e9-88bd-82a6545685ba] Running
E0917 18:40:10.936376    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\auto-762300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 19.0092908s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (20.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-762300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.37s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-762300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.35s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-762300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-762300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.34s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-762300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-762300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.31s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (1.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-762300 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kubenet-762300 "pgrep -a kubelet": (1.116538s)
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (1.12s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (20.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-762300 replace --force -f testdata\netcat-deployment.yaml
E0917 18:40:32.544750    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\kindnet-762300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-bsxqs" [98407a2c-b4b6-4f1a-a6f6-ac3ffd9cd9ed] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0917 18:40:40.335613    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\no-preload-132700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:40:44.562511    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-000400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-bsxqs" [98407a2c-b4b6-4f1a-a6f6-ac3ffd9cd9ed] Running
E0917 18:40:48.287268    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\calico-762300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:40:48.294067    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\calico-762300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:40:48.305641    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\calico-762300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:40:48.328253    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\calico-762300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:40:48.371248    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\calico-762300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:40:48.453434    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\calico-762300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:40:48.615275    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\calico-762300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:40:48.937830    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\calico-762300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:40:49.580182    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\calico-762300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0917 18:40:50.861957    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\calico-762300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 20.0119759s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (20.92s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-762300 exec deployment/netcat -- nslookup kubernetes.default
E0917 18:40:53.424859    2968 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\calico-762300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.42s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-762300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.37s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-762300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.34s)

                                                
                                    

Test skip (24/340)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
TestAddons/parallel/Ingress (19.44s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-000400 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-000400 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-000400 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9d7d403c-941a-4c2d-a221-080f2b22b864] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9d7d403c-941a-4c2d-a221-080f2b22b864] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 17.0075516s
addons_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-000400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (19.44s)
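The ingress DNS portion is skipped here because the Docker driver on Windows reaches the cluster through port forwarding. If one wanted to exercise the same request by hand, a port-forward plus curl is one option; the ingress-nginx-controller service name below is an assumption based on the standard ingress-nginx addon layout and may differ:

# hypothetical manual check; run the port-forward in one terminal...
kubectl --context addons-000400 -n ingress-nginx port-forward svc/ingress-nginx-controller 8080:80
# ...and the request in another
curl -s -H "Host: nginx.example.com" http://127.0.0.1:8080/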

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (300.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-388800 --alsologtostderr -v=1]
functional_test.go:916: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-388800 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 5756: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.03s)

                                                
                                    
TestFunctional/parallel/MountCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:64: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (26.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-388800 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-388800 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-sqjh8" [1bcd4cd8-2f82-4325-9876-a1da8e76ec8c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-sqjh8" [1bcd4cd8-2f82-4325-9876-a1da8e76ec8c] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 26.1081796s
functional_test.go:1646: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (26.53s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestScheduledStopUnix (0s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.78s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-214500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-214500
--- SKIP: TestStartStop/group/disable-driver-mounts (0.78s)

                                                
                                    
TestNetworkPlugins/group/cilium (15.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-762300 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-762300

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-762300

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-762300

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-762300

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-762300

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-762300

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-762300

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-762300

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-762300

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-762300

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-762300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762300"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-762300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762300"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-762300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762300"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-762300

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-762300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762300"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-762300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762300"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-762300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-762300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-762300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-762300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-762300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-762300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-762300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-762300" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-762300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762300"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-762300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762300"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-762300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762300"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-762300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762300"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-762300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762300"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-762300

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-762300

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-762300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-762300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-762300

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-762300

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-762300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-762300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-762300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-762300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-762300" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-762300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762300"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-762300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762300"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-762300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762300"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-762300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762300"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-762300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762300"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 17 Sep 2024 18:17:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://127.0.0.1:59168
  name: kubernetes-upgrade-849400
- cluster:
    certificate-authority: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt
    extensions:
    - extension:
        last-update: Tue, 17 Sep 2024 18:19:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://127.0.0.1:59338
  name: missing-upgrade-465800
- cluster:
    certificate-authority: C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 17 Sep 2024 18:19:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://127.0.0.1:59135
  name: running-upgrade-402600
contexts:
- context:
    cluster: kubernetes-upgrade-849400
    user: kubernetes-upgrade-849400
  name: kubernetes-upgrade-849400
- context:
    cluster: missing-upgrade-465800
    user: missing-upgrade-465800
  name: missing-upgrade-465800
- context:
    cluster: running-upgrade-402600
    user: running-upgrade-402600
  name: running-upgrade-402600
current-context: running-upgrade-402600
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-849400
  user:
    client-certificate: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kubernetes-upgrade-849400/client.crt
    client-key: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\kubernetes-upgrade-849400/client.key
- name: missing-upgrade-465800
  user:
    client-certificate: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\missing-upgrade-465800\client.crt
    client-key: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\missing-upgrade-465800\client.key
- name: running-upgrade-402600
  user:
    client-certificate: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\running-upgrade-402600/client.crt
    client-key: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\running-upgrade-402600/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-762300

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-762300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762300"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-762300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762300"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-762300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762300"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-762300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762300"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-762300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762300"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-762300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762300"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-762300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762300"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-762300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762300"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-762300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762300"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-762300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762300"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-762300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762300"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-762300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762300"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-762300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762300"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-762300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762300"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-762300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762300"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-762300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762300"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-762300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762300"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-762300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-762300"

                                                
                                                
----------------------- debugLogs end: cilium-762300 [took: 14.571344s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-762300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cilium-762300
--- SKIP: TestNetworkPlugins/group/cilium (15.34s)

                                                
                                    