Test Report: Docker_Windows 19644

c0eea096ace35e11d6c690a668e6718dc1bec60e:2024-09-15:36219

Failed tests (3/340)

|-------|--------------------------------------------------|--------------|
| Order | Failed test                                      | Duration (s) |
|-------|--------------------------------------------------|--------------|
| 33    | TestAddons/parallel/Registry                     | 76.69        |
| 56    | TestErrorSpam/setup                              | 65.83        |
| 80    | TestFunctional/serial/MinikubeKubectlCmdDirectly | 5.18         |
|-------|--------------------------------------------------|--------------|
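To reproduce one of these failures locally, the test can be re-run by name with the standard Go test runner from the minikube repository (a minimal sketch; the exact driver, binary, and timeout flags used by this CI job are not recorded here and may need adjusting):

	# re-run only the failing registry addon test (illustrative flags)
	go test ./test/integration -run "TestAddons/parallel/Registry" -v -timeout 60m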
TestAddons/parallel/Registry (76.69s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 7.2795ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-wm6fs" [a7194b70-c1f2-4046-a1e4-d8de0f1a5fff] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.014255s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-rdwl4" [19b0cfa9-36e8-495f-b934-367e8398b5da] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.010763s
addons_test.go:342: (dbg) Run:  kubectl --context addons-291300 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-291300 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-291300 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.2382253s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr **
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-291300 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: Unable to complete rest of the test due to connectivity assumptions
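The failing step above probes the registry addon's Service by its in-cluster DNS name from a throwaway busybox pod, so the timeout could come from the Service, its endpoints, or cluster DNS. A hedged way to narrow that down by hand, reusing the context and namespace shown above (the dns-check pod name is arbitrary):

	# confirm the registry Service exists and has ready endpoints
	kubectl --context addons-291300 -n kube-system get svc,endpoints registry
	# confirm the DNS name the test probes resolves from inside the cluster
	kubectl --context addons-291300 run --rm dns-check --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- nslookup registry.kube-system.svc.cluster.local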
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-291300
helpers_test.go:235: (dbg) docker inspect addons-291300:

-- stdout --
	[
	    {
	        "Id": "da0a6e829194f059df62a540bec74d80f82ebfebc88876aebd3c3330bfc7815a",
	        "Created": "2024-09-15T06:32:33.691993252Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 31624,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-15T06:32:40.546026293Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/da0a6e829194f059df62a540bec74d80f82ebfebc88876aebd3c3330bfc7815a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/da0a6e829194f059df62a540bec74d80f82ebfebc88876aebd3c3330bfc7815a/hostname",
	        "HostsPath": "/var/lib/docker/containers/da0a6e829194f059df62a540bec74d80f82ebfebc88876aebd3c3330bfc7815a/hosts",
	        "LogPath": "/var/lib/docker/containers/da0a6e829194f059df62a540bec74d80f82ebfebc88876aebd3c3330bfc7815a/da0a6e829194f059df62a540bec74d80f82ebfebc88876aebd3c3330bfc7815a-json.log",
	        "Name": "/addons-291300",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-291300:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-291300",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f8c5cc39629b071f7e45d851d2f9662a3b3ee71614e71a3678bd81a06f6b9b9d-init/diff:/var/lib/docker/overlay2/088094ea3ec63a034bad03ae1c40688e7addaaacd3a78b61d75b8c492a19f093/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f8c5cc39629b071f7e45d851d2f9662a3b3ee71614e71a3678bd81a06f6b9b9d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f8c5cc39629b071f7e45d851d2f9662a3b3ee71614e71a3678bd81a06f6b9b9d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f8c5cc39629b071f7e45d851d2f9662a3b3ee71614e71a3678bd81a06f6b9b9d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-291300",
	                "Source": "/var/lib/docker/volumes/addons-291300/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-291300",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-291300",
	                "name.minikube.sigs.k8s.io": "addons-291300",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6abdbba3f531f812a8bf97ddadf70e665d0c756cae4996a33b9cd1246cb6acef",
	            "SandboxKey": "/var/run/docker/netns/6abdbba3f531",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "64886"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "64887"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "64888"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "64889"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "64885"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-291300": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "4dcc9789e53703e0f01de6041eb025ba3206a2454f9d442e200676300a8a0c8c",
	                    "EndpointID": "aa643783ec6d8b741324ab16fc1622e435dabb2dfd7d28c8e567c548a0645e6f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-291300",
	                        "da0a6e829194"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
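Most of the inspect dump above is routine container metadata; the pieces relevant to a connectivity failure can be pulled out directly with Go templates against the same container (illustrative commands, using fields visible in the JSON above):

	# host port mappings published for the node container
	docker inspect addons-291300 --format "{{json .NetworkSettings.Ports}}"
	# the container's address on the addons-291300 Docker network
	docker inspect addons-291300 --format "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}"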
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-291300 -n addons-291300
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-291300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-291300 logs -n 25: (2.507248s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-425100   | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:29 UTC |                     |
	|         | -p download-only-425100                                                                     |                        |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |                   |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                        |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |                   |         |                     |                     |
	|         | --driver=docker                                                                             |                        |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:30 UTC | 15 Sep 24 06:30 UTC |
	| delete  | -p download-only-425100                                                                     | download-only-425100   | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:30 UTC | 15 Sep 24 06:30 UTC |
	| delete  | -p download-only-216600                                                                     | download-only-216600   | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:30 UTC | 15 Sep 24 06:30 UTC |
	| delete  | -p download-only-425100                                                                     | download-only-425100   | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:30 UTC | 15 Sep 24 06:30 UTC |
	| start   | --download-only -p                                                                          | download-docker-561200 | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:30 UTC |                     |
	|         | download-docker-561200                                                                      |                        |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |                   |         |                     |                     |
	|         | --driver=docker                                                                             |                        |                   |         |                     |                     |
	| delete  | -p download-docker-561200                                                                   | download-docker-561200 | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:30 UTC | 15 Sep 24 06:30 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-364200   | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:30 UTC |                     |
	|         | binary-mirror-364200                                                                        |                        |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |                   |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |                   |         |                     |                     |
	|         | http://127.0.0.1:64816                                                                      |                        |                   |         |                     |                     |
	|         | --driver=docker                                                                             |                        |                   |         |                     |                     |
	| delete  | -p binary-mirror-364200                                                                     | binary-mirror-364200   | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:30 UTC | 15 Sep 24 06:30 UTC |
	| addons  | disable dashboard -p                                                                        | addons-291300          | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:30 UTC |                     |
	|         | addons-291300                                                                               |                        |                   |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-291300          | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:30 UTC |                     |
	|         | addons-291300                                                                               |                        |                   |         |                     |                     |
	| start   | -p addons-291300 --wait=true                                                                | addons-291300          | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:30 UTC | 15 Sep 24 06:39 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                        |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |                   |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |                   |         |                     |                     |
	|         | --driver=docker --addons=ingress                                                            |                        |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |                   |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |                   |         |                     |                     |
	| addons  | addons-291300 addons disable                                                                | addons-291300          | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:40 UTC | 15 Sep 24 06:40 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |                   |         |                     |                     |
	| addons  | addons-291300 addons                                                                        | addons-291300          | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:48 UTC | 15 Sep 24 06:48 UTC |
	|         | disable metrics-server                                                                      |                        |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |                   |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-291300          | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:48 UTC | 15 Sep 24 06:49 UTC |
	|         | addons-291300                                                                               |                        |                   |         |                     |                     |
	| ssh     | addons-291300 ssh cat                                                                       | addons-291300          | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:48 UTC | 15 Sep 24 06:48 UTC |
	|         | /opt/local-path-provisioner/pvc-0c1fc810-3bbb-4ea2-b810-dcc494d81a59_default_test-pvc/file1 |                        |                   |         |                     |                     |
	| addons  | addons-291300 addons disable                                                                | addons-291300          | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:48 UTC | 15 Sep 24 06:48 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |                   |         |                     |                     |
	| addons  | addons-291300 addons disable                                                                | addons-291300          | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:49 UTC | 15 Sep 24 06:49 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |                   |         |                     |                     |
	| ssh     | addons-291300 ssh curl -s                                                                   | addons-291300          | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:49 UTC | 15 Sep 24 06:49 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |                   |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |                   |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-291300          | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:49 UTC | 15 Sep 24 06:49 UTC |
	|         | -p addons-291300                                                                            |                        |                   |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-291300          | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:49 UTC | 15 Sep 24 06:49 UTC |
	|         | addons-291300                                                                               |                        |                   |         |                     |                     |
	| addons  | addons-291300 addons disable                                                                | addons-291300          | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:49 UTC | 15 Sep 24 06:49 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |                   |         |                     |                     |
	|         | -v=1                                                                                        |                        |                   |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-291300          | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:49 UTC | 15 Sep 24 06:49 UTC |
	|         | -p addons-291300                                                                            |                        |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |                   |         |                     |                     |
	| addons  | addons-291300 addons                                                                        | addons-291300          | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:49 UTC | 15 Sep 24 06:49 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |                   |         |                     |                     |
	| addons  | addons-291300 addons                                                                        | addons-291300          | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:49 UTC | 15 Sep 24 06:49 UTC |
	|         | disable volumesnapshots                                                                     |                        |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 06:30:14
	Running on machine: minikube2
	Binary: Built with gc go1.23.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 06:30:14.061116    1828 out.go:345] Setting OutFile to fd 276 ...
	I0915 06:30:14.140977    1828 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:30:14.140977    1828 out.go:358] Setting ErrFile to fd 852...
	I0915 06:30:14.140977    1828 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:30:14.164846    1828 out.go:352] Setting JSON to false
	I0915 06:30:14.167834    1828 start.go:129] hostinfo: {"hostname":"minikube2","uptime":5587,"bootTime":1726376226,"procs":184,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0915 06:30:14.167834    1828 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 06:30:14.205506    1828 out.go:177] * [addons-291300] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0915 06:30:14.210075    1828 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0915 06:30:14.210075    1828 notify.go:220] Checking for updates...
	I0915 06:30:14.215798    1828 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 06:30:14.219206    1828 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0915 06:30:14.221952    1828 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 06:30:14.224557    1828 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:30:14.227752    1828 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:30:14.425015    1828 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.1 (166053)
	I0915 06:30:14.433292    1828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:30:14.772645    1828 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:76 SystemTime:2024-09-15 06:30:14.744219364 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0915 06:30:14.776505    1828 out.go:177] * Using the docker driver based on user configuration
	I0915 06:30:14.781995    1828 start.go:297] selected driver: docker
	I0915 06:30:14.781995    1828 start.go:901] validating driver "docker" against <nil>
	I0915 06:30:14.781995    1828 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 06:30:14.843702    1828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:30:15.174254    1828 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:76 SystemTime:2024-09-15 06:30:15.14808527 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Inde
xServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 E
xpected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaV
ersion:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://
github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0915 06:30:15.175254    1828 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 06:30:15.176264    1828 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 06:30:15.181662    1828 out.go:177] * Using Docker Desktop driver with root privileges
	I0915 06:30:15.184264    1828 cni.go:84] Creating CNI manager for ""
	I0915 06:30:15.184264    1828 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 06:30:15.184264    1828 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0915 06:30:15.184264    1828 start.go:340] cluster config:
	{Name:addons-291300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-291300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:30:15.187254    1828 out.go:177] * Starting "addons-291300" primary control-plane node in "addons-291300" cluster
	I0915 06:30:15.190257    1828 cache.go:121] Beginning downloading kic base image for docker with docker
	I0915 06:30:15.192263    1828 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0915 06:30:15.196264    1828 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 06:30:15.196264    1828 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0915 06:30:15.196264    1828 preload.go:146] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0915 06:30:15.196264    1828 cache.go:56] Caching tarball of preloaded images
	I0915 06:30:15.196264    1828 preload.go:172] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0915 06:30:15.197276    1828 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0915 06:30:15.197276    1828 profile.go:143] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-291300\config.json ...
	I0915 06:30:15.198264    1828 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-291300\config.json: {Name:mkbac20f49a7da7eeb22c4a07f2d01f0e509a4b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:30:15.270949    1828 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0915 06:30:15.270949    1828 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1726358845-19644@sha256_4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0.tar
	I0915 06:30:15.270949    1828 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1726358845-19644@sha256_4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0.tar
	I0915 06:30:15.270949    1828 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0915 06:30:15.270949    1828 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0915 06:30:15.270949    1828 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0915 06:30:15.272187    1828 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0915 06:30:15.272289    1828 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0915 06:30:15.272289    1828 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1726358845-19644@sha256_4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0.tar
	I0915 06:31:26.027474    1828 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0915 06:31:26.027547    1828 cache.go:194] Successfully downloaded all kic artifacts
	I0915 06:31:26.027669    1828 start.go:360] acquireMachinesLock for addons-291300: {Name:mk7e5f16b8daeac3adc11e34ca3194d55df40cae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 06:31:26.027917    1828 start.go:364] duration metric: took 162.3µs to acquireMachinesLock for "addons-291300"
	I0915 06:31:26.027917    1828 start.go:93] Provisioning new machine with config: &{Name:addons-291300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-291300 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 06:31:26.027917    1828 start.go:125] createHost starting for "" (driver="docker")
	I0915 06:31:26.033947    1828 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0915 06:31:26.033947    1828 start.go:159] libmachine.API.Create for "addons-291300" (driver="docker")
	I0915 06:31:26.034495    1828 client.go:168] LocalClient.Create starting
	I0915 06:31:26.035495    1828 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem
	I0915 06:31:26.247738    1828 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem
	I0915 06:31:26.724263    1828 cli_runner.go:164] Run: docker network inspect addons-291300 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0915 06:31:26.797260    1828 cli_runner.go:211] docker network inspect addons-291300 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0915 06:31:26.803294    1828 network_create.go:284] running [docker network inspect addons-291300] to gather additional debugging logs...
	I0915 06:31:26.803294    1828 cli_runner.go:164] Run: docker network inspect addons-291300
	W0915 06:31:26.868100    1828 cli_runner.go:211] docker network inspect addons-291300 returned with exit code 1
	I0915 06:31:26.868100    1828 network_create.go:287] error running [docker network inspect addons-291300]: docker network inspect addons-291300: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-291300 not found
	I0915 06:31:26.868100    1828 network_create.go:289] output of [docker network inspect addons-291300]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-291300 not found
	
	** /stderr **
	I0915 06:31:26.877639    1828 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0915 06:31:26.969319    1828 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000644780}
	I0915 06:31:26.969418    1828 network_create.go:124] attempt to create docker network addons-291300 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0915 06:31:26.979562    1828 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-291300 addons-291300
	I0915 06:31:27.179716    1828 network_create.go:108] docker network addons-291300 192.168.49.0/24 created
	I0915 06:31:27.179716    1828 kic.go:121] calculated static IP "192.168.49.2" for the "addons-291300" container
	I0915 06:31:27.194742    1828 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0915 06:31:27.275229    1828 cli_runner.go:164] Run: docker volume create addons-291300 --label name.minikube.sigs.k8s.io=addons-291300 --label created_by.minikube.sigs.k8s.io=true
	I0915 06:31:27.349885    1828 oci.go:103] Successfully created a docker volume addons-291300
	I0915 06:31:27.360222    1828 cli_runner.go:164] Run: docker run --rm --name addons-291300-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-291300 --entrypoint /usr/bin/test -v addons-291300:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0915 06:31:54.759282    1828 cli_runner.go:217] Completed: docker run --rm --name addons-291300-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-291300 --entrypoint /usr/bin/test -v addons-291300:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib: (27.3987035s)
	I0915 06:31:54.759282    1828 oci.go:107] Successfully prepared a docker volume addons-291300
	I0915 06:31:54.759282    1828 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 06:31:54.759282    1828 kic.go:194] Starting extracting preloaded images to volume ...
	I0915 06:31:54.773206    1828 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-291300:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0915 06:32:32.962651    1828 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-291300:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (38.1891447s)
	I0915 06:32:32.962651    1828 kic.go:203] duration metric: took 38.203068s to extract preloaded images to volume ...
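The 38s step above unpacks the preloaded-images tarball into the addons-291300 volume through a throwaway kicbase container running tar, so the node container created next already has its images in place. A minimal sketch to peek at the result; busybox is an assumption here (any small image with ls works), and the lib/docker layout is inferred from the later /var mount of this volume:

    # the volume is later mounted at /var in the node, so its root should hold lib/docker
    docker run --rm -v addons-291300:/var busybox ls /var/lib/docker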
	I0915 06:32:32.973312    1828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:32:33.295269    1828 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:80 SystemTime:2024-09-15 06:32:33.267965559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0915 06:32:33.303280    1828 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0915 06:32:33.620296    1828 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-291300 --name addons-291300 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-291300 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-291300 --network addons-291300 --ip 192.168.49.2 --volume addons-291300:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0915 06:32:41.175370    1828 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-291300 --name addons-291300 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-291300 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-291300 --network addons-291300 --ip 192.168.49.2 --volume addons-291300:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0: (7.5550145s)
	I0915 06:32:41.188376    1828 cli_runner.go:164] Run: docker container inspect addons-291300 --format={{.State.Running}}
	I0915 06:32:41.394192    1828 cli_runner.go:164] Run: docker container inspect addons-291300 --format={{.State.Status}}
	I0915 06:32:41.569008    1828 cli_runner.go:164] Run: docker exec addons-291300 stat /var/lib/dpkg/alternatives/iptables
	I0915 06:32:41.921125    1828 oci.go:144] the created container "addons-291300" has a running status.
	I0915 06:32:41.921125    1828 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-291300\id_rsa...
	I0915 06:32:42.303709    1828 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-291300\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0915 06:32:43.039331    1828 cli_runner.go:164] Run: docker container inspect addons-291300 --format={{.State.Status}}
	I0915 06:32:43.207827    1828 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0915 06:32:43.207827    1828 kic_runner.go:114] Args: [docker exec --privileged addons-291300 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0915 06:32:43.518243    1828 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-291300\id_rsa...
	I0915 06:32:47.362208    1828 cli_runner.go:164] Run: docker container inspect addons-291300 --format={{.State.Status}}
	I0915 06:32:47.531203    1828 machine.go:93] provisionDockerMachine start ...
	I0915 06:32:47.546207    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291300
	I0915 06:32:47.718948    1828 main.go:141] libmachine: Using SSH client type: native
	I0915 06:32:47.733935    1828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfc9a00] 0xfcc540 <nil>  [] 0s} 127.0.0.1 64886 <nil> <nil>}
	I0915 06:32:47.733935    1828 main.go:141] libmachine: About to run SSH command:
	hostname
	I0915 06:32:47.955937    1828 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-291300
	
	I0915 06:32:47.955937    1828 ubuntu.go:169] provisioning hostname "addons-291300"
	I0915 06:32:47.971951    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291300
	I0915 06:32:48.135180    1828 main.go:141] libmachine: Using SSH client type: native
	I0915 06:32:48.135180    1828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfc9a00] 0xfcc540 <nil>  [] 0s} 127.0.0.1 64886 <nil> <nil>}
	I0915 06:32:48.135180    1828 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-291300 && echo "addons-291300" | sudo tee /etc/hostname
	I0915 06:32:48.379017    1828 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-291300
	
	I0915 06:32:48.395033    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291300
	I0915 06:32:48.569015    1828 main.go:141] libmachine: Using SSH client type: native
	I0915 06:32:48.569015    1828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfc9a00] 0xfcc540 <nil>  [] 0s} 127.0.0.1 64886 <nil> <nil>}
	I0915 06:32:48.569015    1828 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-291300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-291300/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-291300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 06:32:48.765345    1828 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 06:32:48.765345    1828 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube2\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube2\minikube-integration\.minikube}
	I0915 06:32:48.765345    1828 ubuntu.go:177] setting up certificates
	I0915 06:32:48.765345    1828 provision.go:84] configureAuth start
	I0915 06:32:48.778308    1828 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-291300
	I0915 06:32:48.942568    1828 provision.go:143] copyHostCerts
	I0915 06:32:48.943561    1828 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem (1679 bytes)
	I0915 06:32:48.947544    1828 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0915 06:32:48.949544    1828 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0915 06:32:48.950541    1828 provision.go:117] generating server cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-291300 san=[127.0.0.1 192.168.49.2 addons-291300 localhost minikube]
	I0915 06:32:49.956956    1828 provision.go:177] copyRemoteCerts
	I0915 06:32:49.975963    1828 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 06:32:49.990955    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291300
	I0915 06:32:50.161952    1828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64886 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-291300\id_rsa Username:docker}
	I0915 06:32:50.319428    1828 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0915 06:32:50.392667    1828 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0915 06:32:50.465255    1828 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0915 06:32:50.513898    1828 provision.go:87] duration metric: took 1.7475659s to configureAuth
	I0915 06:32:50.513898    1828 ubuntu.go:193] setting minikube options for container-runtime
	I0915 06:32:50.514904    1828 config.go:182] Loaded profile config "addons-291300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 06:32:50.521906    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291300
	I0915 06:32:50.605797    1828 main.go:141] libmachine: Using SSH client type: native
	I0915 06:32:50.606403    1828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfc9a00] 0xfcc540 <nil>  [] 0s} 127.0.0.1 64886 <nil> <nil>}
	I0915 06:32:50.606403    1828 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0915 06:32:50.835875    1828 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0915 06:32:50.835875    1828 ubuntu.go:71] root file system type: overlay
	I0915 06:32:50.836830    1828 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0915 06:32:50.852575    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291300
	I0915 06:32:51.032056    1828 main.go:141] libmachine: Using SSH client type: native
	I0915 06:32:51.032056    1828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfc9a00] 0xfcc540 <nil>  [] 0s} 127.0.0.1 64886 <nil> <nil>}
	I0915 06:32:51.032056    1828 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0915 06:32:51.313046    1828 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0915 06:32:51.325053    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291300
	I0915 06:32:51.503298    1828 main.go:141] libmachine: Using SSH client type: native
	I0915 06:32:51.504419    1828 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfc9a00] 0xfcc540 <nil>  [] 0s} 127.0.0.1 64886 <nil> <nil>}
	I0915 06:32:51.504419    1828 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0915 06:32:53.379903    1828 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-06 12:06:41.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-15 06:32:51.295917834 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
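The SSH step above writes a full replacement unit to /lib/systemd/system/docker.service.new, diffs it against the installed unit, and only swaps it in (then daemon-reload, enable, restart) when the two differ; the bare ExecStart= line is what clears the inherited start command so the minikube-specific dockerd invocation (TLS on 2376, --insecure-registry for the 10.96.0.0/12 service CIDR) is the only one left. A quick way to see the unit the node actually ended up running, assuming the profile name from this log:

    minikube ssh -p addons-291300 -- sudo systemctl cat docker.service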
	
	I0915 06:32:53.379903    1828 machine.go:96] duration metric: took 5.8486535s to provisionDockerMachine
	I0915 06:32:53.379903    1828 client.go:171] duration metric: took 1m27.3447214s to LocalClient.Create
	I0915 06:32:53.379903    1828 start.go:167] duration metric: took 1m27.3452701s to libmachine.API.Create "addons-291300"
	I0915 06:32:53.379903    1828 start.go:293] postStartSetup for "addons-291300" (driver="docker")
	I0915 06:32:53.379903    1828 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 06:32:53.390902    1828 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 06:32:53.398900    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291300
	I0915 06:32:53.466909    1828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64886 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-291300\id_rsa Username:docker}
	I0915 06:32:53.647945    1828 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 06:32:53.659919    1828 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0915 06:32:53.659919    1828 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0915 06:32:53.659919    1828 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0915 06:32:53.659919    1828 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0915 06:32:53.659919    1828 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\addons for local assets ...
	I0915 06:32:53.660940    1828 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\files for local assets ...
	I0915 06:32:53.660940    1828 start.go:296] duration metric: took 281.0353ms for postStartSetup
	I0915 06:32:53.676954    1828 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-291300
	I0915 06:32:53.860009    1828 profile.go:143] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-291300\config.json ...
	I0915 06:32:53.885002    1828 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 06:32:53.895949    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291300
	I0915 06:32:54.074302    1828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64886 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-291300\id_rsa Username:docker}
	I0915 06:32:54.238326    1828 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0915 06:32:54.254346    1828 start.go:128] duration metric: took 1m28.225736s to createHost
	I0915 06:32:54.254346    1828 start.go:83] releasing machines lock for "addons-291300", held for 1m28.225736s
	I0915 06:32:54.264309    1828 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-291300
	I0915 06:32:54.455360    1828 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0915 06:32:54.472863    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291300
	I0915 06:32:54.473892    1828 ssh_runner.go:195] Run: cat /version.json
	I0915 06:32:54.493875    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291300
	I0915 06:32:54.654671    1828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64886 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-291300\id_rsa Username:docker}
	I0915 06:32:54.664683    1828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64886 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-291300\id_rsa Username:docker}
	W0915 06:32:54.793327    1828 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0915 06:32:54.831291    1828 ssh_runner.go:195] Run: systemctl --version
	I0915 06:32:54.870253    1828 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0915 06:32:54.907247    1828 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W0915 06:32:54.940252    1828 start.go:439] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I0915 06:32:54.959269    1828 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W0915 06:32:55.025164    1828 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W0915 06:32:55.025164    1828 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
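The registry.k8s.io probe fails with status 127 because the Windows-side binary name curl.exe is passed verbatim into the Linux node, where no such command exists, so the two warnings above do not by themselves prove the network is broken. A hedged way to re-check reachability by hand (this assumes curl is installed in the kicbase node image; the log only shows that curl.exe is not):

    minikube ssh -p addons-291300 -- curl -sS -m 2 https://registry.k8s.io/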
	I0915 06:32:55.062125    1828 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0915 06:32:55.062125    1828 start.go:495] detecting cgroup driver to use...
	I0915 06:32:55.062125    1828 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0915 06:32:55.063121    1828 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 06:32:55.125114    1828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0915 06:32:55.180184    1828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0915 06:32:55.209171    1828 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0915 06:32:55.231057    1828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0915 06:32:55.280058    1828 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0915 06:32:55.333246    1828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0915 06:32:55.383250    1828 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0915 06:32:55.430256    1828 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 06:32:55.488376    1828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0915 06:32:55.544218    1828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0915 06:32:55.595387    1828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0915 06:32:55.643391    1828 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 06:32:55.687368    1828 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0915 06:32:55.735404    1828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:32:55.951456    1828 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0915 06:32:56.256305    1828 start.go:495] detecting cgroup driver to use...
	I0915 06:32:56.256305    1828 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0915 06:32:56.276267    1828 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0915 06:32:56.304266    1828 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0915 06:32:56.320276    1828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0915 06:32:56.349302    1828 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 06:32:56.421344    1828 ssh_runner.go:195] Run: which cri-dockerd
	I0915 06:32:56.456947    1828 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0915 06:32:56.488958    1828 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0915 06:32:56.557954    1828 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0915 06:32:56.743344    1828 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0915 06:32:56.991655    1828 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0915 06:32:56.991655    1828 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0915 06:32:57.063668    1828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:32:57.291676    1828 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0915 06:32:58.300735    1828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0915 06:32:58.349735    1828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0915 06:32:58.398735    1828 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0915 06:32:58.639321    1828 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0915 06:32:58.859357    1828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:32:59.077290    1828 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0915 06:32:59.130286    1828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0915 06:32:59.179312    1828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:32:59.387165    1828 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0915 06:32:59.611498    1828 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0915 06:32:59.633490    1828 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0915 06:32:59.645516    1828 start.go:563] Will wait 60s for crictl version
	I0915 06:32:59.664497    1828 ssh_runner.go:195] Run: which crictl
	I0915 06:32:59.699753    1828 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 06:32:59.802761    1828 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0915 06:32:59.815341    1828 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0915 06:32:59.902407    1828 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0915 06:32:59.985375    1828 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0915 06:32:59.998343    1828 cli_runner.go:164] Run: docker exec -t addons-291300 dig +short host.docker.internal
	I0915 06:33:00.397957    1828 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0915 06:33:00.416977    1828 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0915 06:33:00.427963    1828 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
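The dig against host.docker.internal resolves the Windows host's address (192.168.65.254 here) from inside the node, and the bash one-liner above rewrites /etc/hosts so host.minikube.internal maps to it without accumulating duplicate entries. A minimal check:

    minikube ssh -p addons-291300 -- grep host.minikube.internal /etc/hosts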
	I0915 06:33:00.471078    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-291300
	I0915 06:33:00.538129    1828 kubeadm.go:883] updating cluster {Name:addons-291300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-291300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0915 06:33:00.539086    1828 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 06:33:00.546073    1828 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 06:33:00.590701    1828 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0915 06:33:00.590701    1828 docker.go:615] Images already preloaded, skipping extraction
	I0915 06:33:00.599366    1828 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 06:33:00.646258    1828 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0915 06:33:00.646864    1828 cache_images.go:84] Images are preloaded, skipping loading
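Because the docker images listing already contains every image needed for Kubernetes v1.31.1, the cache-load step is skipped. The same listing can be reproduced directly against the node's docker if needed:

    minikube ssh -p addons-291300 -- docker images --format '{{.Repository}}:{{.Tag}}'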
	I0915 06:33:00.646936    1828 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0915 06:33:00.647214    1828 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-291300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-291300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0915 06:33:00.655009    1828 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0915 06:33:00.757804    1828 cni.go:84] Creating CNI manager for ""
	I0915 06:33:00.757804    1828 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 06:33:00.757804    1828 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0915 06:33:00.757804    1828 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-291300 NodeName:addons-291300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0915 06:33:00.757804    1828 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-291300"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0915 06:33:00.769853    1828 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0915 06:33:00.792936    1828 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 06:33:00.805489    1828 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 06:33:00.828224    1828 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0915 06:33:00.864236    1828 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 06:33:00.907244    1828 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
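The rendered kubeadm config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) is copied to /var/tmp/minikube/kubeadm.yaml.new on the node. A hedged sanity check before init, assuming kubeadm ships the "config validate" subcommand (present in recent releases) and sits in the same binaries directory as the kubelet path shown earlier:

    minikube ssh -p addons-291300 -- sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new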
	I0915 06:33:00.975237    1828 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0915 06:33:00.990267    1828 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 06:33:01.049687    1828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:33:01.273472    1828 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 06:33:01.317767    1828 certs.go:68] Setting up C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-291300 for IP: 192.168.49.2
	I0915 06:33:01.317827    1828 certs.go:194] generating shared ca certs ...
	I0915 06:33:01.317827    1828 certs.go:226] acquiring lock for ca certs: {Name:mka39b35711ce17aa627001b408a7adb2f266bbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:33:01.318440    1828 certs.go:240] generating "minikubeCA" ca cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key
	I0915 06:33:01.734173    1828 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt ...
	I0915 06:33:01.734173    1828 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt: {Name:mkc5b851ca682f7aff857055d591694d36175fe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:33:01.736189    1828 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key ...
	I0915 06:33:01.736189    1828 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key: {Name:mk9089fc50aceda2aa3f2747811085b675041b36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:33:01.737212    1828 certs.go:240] generating "proxyClientCA" ca cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key
	I0915 06:33:01.899611    1828 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0915 06:33:01.899611    1828 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mkd5c7d70e5d33d063f91e60ee9bd4852fbc5909 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:33:01.901570    1828 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key ...
	I0915 06:33:01.901570    1828 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key: {Name:mkbb7b28a2f5e99a3e449ce85c8a848dee712fb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:33:01.903564    1828 certs.go:256] generating profile certs ...
	I0915 06:33:01.904548    1828 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-291300\client.key
	I0915 06:33:01.904548    1828 crypto.go:68] Generating cert C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-291300\client.crt with IP's: []
	I0915 06:33:02.566634    1828 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-291300\client.crt ...
	I0915 06:33:02.566634    1828 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-291300\client.crt: {Name:mk4cbc5cd109476bc04e62a76130d07366d63e93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:33:02.568627    1828 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-291300\client.key ...
	I0915 06:33:02.568627    1828 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-291300\client.key: {Name:mk910ae8dfa95ecba026ec360fe24aad0dfc5665 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:33:02.569627    1828 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-291300\apiserver.key.61c5af6e
	I0915 06:33:02.569627    1828 crypto.go:68] Generating cert C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-291300\apiserver.crt.61c5af6e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0915 06:33:03.090252    1828 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-291300\apiserver.crt.61c5af6e ...
	I0915 06:33:03.090252    1828 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-291300\apiserver.crt.61c5af6e: {Name:mk10914c137c6064adb6c5ad73c789109f581599 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:33:03.092256    1828 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-291300\apiserver.key.61c5af6e ...
	I0915 06:33:03.092256    1828 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-291300\apiserver.key.61c5af6e: {Name:mkec0e9ecb572d3db0a3ded230be872ea2782965 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:33:03.093469    1828 certs.go:381] copying C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-291300\apiserver.crt.61c5af6e -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-291300\apiserver.crt
	I0915 06:33:03.105455    1828 certs.go:385] copying C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-291300\apiserver.key.61c5af6e -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-291300\apiserver.key
	I0915 06:33:03.106459    1828 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-291300\proxy-client.key
	I0915 06:33:03.106459    1828 crypto.go:68] Generating cert C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-291300\proxy-client.crt with IP's: []
	I0915 06:33:03.299290    1828 crypto.go:156] Writing cert to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-291300\proxy-client.crt ...
	I0915 06:33:03.299290    1828 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-291300\proxy-client.crt: {Name:mk10063e3f0bf022da53255c10da8a3707e01d07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:33:03.300276    1828 crypto.go:164] Writing key to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-291300\proxy-client.key ...
	I0915 06:33:03.300276    1828 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-291300\proxy-client.key: {Name:mk793d422652eeb8b027bca3b053bca0825c5eed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:33:03.312887    1828 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0915 06:33:03.313729    1828 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0915 06:33:03.313729    1828 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0915 06:33:03.314369    1828 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0915 06:33:03.315559    1828 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 06:33:03.373018    1828 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 06:33:03.419247    1828 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 06:33:03.471903    1828 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0915 06:33:03.521460    1828 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-291300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0915 06:33:03.574747    1828 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-291300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0915 06:33:03.627312    1828 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-291300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 06:33:03.675127    1828 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\addons-291300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0915 06:33:03.724507    1828 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 06:33:03.769431    1828 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 06:33:03.817984    1828 ssh_runner.go:195] Run: openssl version
	I0915 06:33:03.847422    1828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 06:33:03.882136    1828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:33:03.894234    1828 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:33 /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:33:03.906164    1828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:33:03.930157    1828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0915 06:33:03.965188    1828 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 06:33:03.975630    1828 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0915 06:33:03.975630    1828 kubeadm.go:392] StartCluster: {Name:addons-291300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-291300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:33:03.984697    1828 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0915 06:33:04.037632    1828 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0915 06:33:04.076086    1828 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 06:33:04.096278    1828 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0915 06:33:04.108785    1828 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 06:33:04.129397    1828 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 06:33:04.129397    1828 kubeadm.go:157] found existing configuration files:
	
	I0915 06:33:04.142534    1828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0915 06:33:04.165915    1828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0915 06:33:04.176735    1828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0915 06:33:04.214517    1828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0915 06:33:04.235769    1828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0915 06:33:04.249399    1828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0915 06:33:04.284146    1828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0915 06:33:04.307057    1828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0915 06:33:04.320005    1828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 06:33:04.353703    1828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0915 06:33:04.375225    1828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0915 06:33:04.388745    1828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0915 06:33:04.410375    1828 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0915 06:33:04.480799    1828 kubeadm.go:310] W0915 06:33:04.478289    1979 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 06:33:04.481942    1828 kubeadm.go:310] W0915 06:33:04.479328    1979 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 06:33:04.517417    1828 kubeadm.go:310] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I0915 06:33:04.691232    1828 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0915 06:33:19.019098    1828 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0915 06:33:19.019098    1828 kubeadm.go:310] [preflight] Running pre-flight checks
	I0915 06:33:19.019825    1828 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0915 06:33:19.020177    1828 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0915 06:33:19.020433    1828 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0915 06:33:19.020614    1828 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0915 06:33:19.025125    1828 out.go:235]   - Generating certificates and keys ...
	I0915 06:33:19.025337    1828 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0915 06:33:19.025596    1828 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0915 06:33:19.025815    1828 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0915 06:33:19.025929    1828 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0915 06:33:19.026107    1828 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0915 06:33:19.026308    1828 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0915 06:33:19.026487    1828 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0915 06:33:19.026819    1828 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-291300 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0915 06:33:19.026964    1828 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0915 06:33:19.027350    1828 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-291300 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0915 06:33:19.027569    1828 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0915 06:33:19.027569    1828 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0915 06:33:19.027569    1828 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0915 06:33:19.027569    1828 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0915 06:33:19.028172    1828 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0915 06:33:19.028213    1828 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0915 06:33:19.028213    1828 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0915 06:33:19.028213    1828 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0915 06:33:19.028736    1828 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0915 06:33:19.028820    1828 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0915 06:33:19.028820    1828 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0915 06:33:19.038531    1828 out.go:235]   - Booting up control plane ...
	I0915 06:33:19.039399    1828 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0915 06:33:19.039399    1828 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0915 06:33:19.039399    1828 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0915 06:33:19.040058    1828 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0915 06:33:19.040156    1828 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0915 06:33:19.040156    1828 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0915 06:33:19.040808    1828 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0915 06:33:19.040864    1828 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0915 06:33:19.040864    1828 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.004486299s
	I0915 06:33:19.040864    1828 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0915 06:33:19.041444    1828 kubeadm.go:310] [api-check] The API server is healthy after 8.503622966s
	I0915 06:33:19.041610    1828 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0915 06:33:19.041610    1828 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0915 06:33:19.042434    1828 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0915 06:33:19.042960    1828 kubeadm.go:310] [mark-control-plane] Marking the node addons-291300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0915 06:33:19.043042    1828 kubeadm.go:310] [bootstrap-token] Using token: eh4fnf.pm6ffynltyzz9cm2
	I0915 06:33:19.050687    1828 out.go:235]   - Configuring RBAC rules ...
	I0915 06:33:19.050803    1828 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0915 06:33:19.050803    1828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0915 06:33:19.051657    1828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0915 06:33:19.051657    1828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0915 06:33:19.052720    1828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0915 06:33:19.052850    1828 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0915 06:33:19.053244    1828 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0915 06:33:19.053244    1828 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0915 06:33:19.053244    1828 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0915 06:33:19.053244    1828 kubeadm.go:310] 
	I0915 06:33:19.053244    1828 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0915 06:33:19.053244    1828 kubeadm.go:310] 
	I0915 06:33:19.053852    1828 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0915 06:33:19.053888    1828 kubeadm.go:310] 
	I0915 06:33:19.053888    1828 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0915 06:33:19.053888    1828 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0915 06:33:19.053888    1828 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0915 06:33:19.053888    1828 kubeadm.go:310] 
	I0915 06:33:19.053888    1828 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0915 06:33:19.053888    1828 kubeadm.go:310] 
	I0915 06:33:19.054580    1828 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0915 06:33:19.054580    1828 kubeadm.go:310] 
	I0915 06:33:19.054580    1828 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0915 06:33:19.054580    1828 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0915 06:33:19.054580    1828 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0915 06:33:19.054580    1828 kubeadm.go:310] 
	I0915 06:33:19.055393    1828 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0915 06:33:19.055435    1828 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0915 06:33:19.055435    1828 kubeadm.go:310] 
	I0915 06:33:19.055435    1828 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token eh4fnf.pm6ffynltyzz9cm2 \
	I0915 06:33:19.055435    1828 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:09e42c7e3eca42386c49bf7c0727439f450cbfffbf5d831b4d4b78899be7bd33 \
	I0915 06:33:19.056063    1828 kubeadm.go:310] 	--control-plane 
	I0915 06:33:19.056063    1828 kubeadm.go:310] 
	I0915 06:33:19.056241    1828 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0915 06:33:19.056241    1828 kubeadm.go:310] 
	I0915 06:33:19.056241    1828 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token eh4fnf.pm6ffynltyzz9cm2 \
	I0915 06:33:19.056241    1828 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:09e42c7e3eca42386c49bf7c0727439f450cbfffbf5d831b4d4b78899be7bd33 
	I0915 06:33:19.056241    1828 cni.go:84] Creating CNI manager for ""
	I0915 06:33:19.056791    1828 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 06:33:19.060914    1828 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0915 06:33:19.075907    1828 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0915 06:33:19.101478    1828 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0915 06:33:19.137796    1828 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0915 06:33:19.155069    1828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-291300 minikube.k8s.io/updated_at=2024_09_15T06_33_19_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a minikube.k8s.io/name=addons-291300 minikube.k8s.io/primary=true
	I0915 06:33:19.155069    1828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:33:19.158871    1828 ops.go:34] apiserver oom_adj: -16
	I0915 06:33:19.337882    1828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:33:19.837136    1828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:33:20.338836    1828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:33:20.837792    1828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:33:21.337538    1828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:33:21.834741    1828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:33:22.336807    1828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:33:22.835930    1828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:33:23.336314    1828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:33:23.503669    1828 kubeadm.go:1113] duration metric: took 4.3657467s to wait for elevateKubeSystemPrivileges
	I0915 06:33:23.503669    1828 kubeadm.go:394] duration metric: took 19.5278852s to StartCluster
	I0915 06:33:23.503669    1828 settings.go:142] acquiring lock: {Name:mk9684611c6005d251a6ecf406b4611c2c1e30f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:33:23.503669    1828 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0915 06:33:23.504693    1828 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\kubeconfig: {Name:mk7e72b8b9c82f9d87d6aed6af6962a1c1fa489d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:33:23.506672    1828 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0915 06:33:23.506672    1828 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 06:33:23.506672    1828 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0915 06:33:23.506672    1828 addons.go:69] Setting yakd=true in profile "addons-291300"
	I0915 06:33:23.506672    1828 addons.go:69] Setting inspektor-gadget=true in profile "addons-291300"
	I0915 06:33:23.506672    1828 addons.go:69] Setting storage-provisioner=true in profile "addons-291300"
	I0915 06:33:23.506672    1828 config.go:182] Loaded profile config "addons-291300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 06:33:23.506672    1828 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-291300"
	I0915 06:33:23.506672    1828 addons.go:69] Setting registry=true in profile "addons-291300"
	I0915 06:33:23.506672    1828 addons.go:69] Setting default-storageclass=true in profile "addons-291300"
	I0915 06:33:23.506672    1828 addons.go:69] Setting gcp-auth=true in profile "addons-291300"
	I0915 06:33:23.506672    1828 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-291300"
	I0915 06:33:23.506672    1828 addons.go:69] Setting volcano=true in profile "addons-291300"
	I0915 06:33:23.506672    1828 mustload.go:65] Loading cluster: addons-291300
	I0915 06:33:23.506672    1828 addons.go:234] Setting addon volcano=true in "addons-291300"
	I0915 06:33:23.506672    1828 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-291300"
	I0915 06:33:23.506672    1828 host.go:66] Checking if "addons-291300" exists ...
	I0915 06:33:23.506672    1828 addons.go:234] Setting addon inspektor-gadget=true in "addons-291300"
	I0915 06:33:23.507696    1828 addons.go:69] Setting volumesnapshots=true in profile "addons-291300"
	I0915 06:33:23.507696    1828 addons.go:234] Setting addon volumesnapshots=true in "addons-291300"
	I0915 06:33:23.507696    1828 host.go:66] Checking if "addons-291300" exists ...
	I0915 06:33:23.507696    1828 host.go:66] Checking if "addons-291300" exists ...
	I0915 06:33:23.506672    1828 addons.go:69] Setting cloud-spanner=true in profile "addons-291300"
	I0915 06:33:23.507696    1828 config.go:182] Loaded profile config "addons-291300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 06:33:23.507696    1828 addons.go:234] Setting addon cloud-spanner=true in "addons-291300"
	I0915 06:33:23.507696    1828 host.go:66] Checking if "addons-291300" exists ...
	I0915 06:33:23.506672    1828 addons.go:234] Setting addon registry=true in "addons-291300"
	I0915 06:33:23.508688    1828 host.go:66] Checking if "addons-291300" exists ...
	I0915 06:33:23.506672    1828 addons.go:234] Setting addon yakd=true in "addons-291300"
	I0915 06:33:23.508688    1828 host.go:66] Checking if "addons-291300" exists ...
	I0915 06:33:23.506672    1828 addons.go:234] Setting addon storage-provisioner=true in "addons-291300"
	I0915 06:33:23.508688    1828 host.go:66] Checking if "addons-291300" exists ...
	I0915 06:33:23.506672    1828 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-291300"
	I0915 06:33:23.508688    1828 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-291300"
	I0915 06:33:23.508688    1828 host.go:66] Checking if "addons-291300" exists ...
	I0915 06:33:23.506672    1828 addons.go:69] Setting ingress=true in profile "addons-291300"
	I0915 06:33:23.509689    1828 addons.go:234] Setting addon ingress=true in "addons-291300"
	I0915 06:33:23.506672    1828 addons.go:69] Setting metrics-server=true in profile "addons-291300"
	I0915 06:33:23.509689    1828 addons.go:234] Setting addon metrics-server=true in "addons-291300"
	I0915 06:33:23.506672    1828 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-291300"
	I0915 06:33:23.507696    1828 addons.go:69] Setting ingress-dns=true in profile "addons-291300"
	I0915 06:33:23.509689    1828 host.go:66] Checking if "addons-291300" exists ...
	I0915 06:33:23.509689    1828 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-291300"
	I0915 06:33:23.509689    1828 host.go:66] Checking if "addons-291300" exists ...
	I0915 06:33:23.509689    1828 addons.go:234] Setting addon ingress-dns=true in "addons-291300"
	I0915 06:33:23.509689    1828 host.go:66] Checking if "addons-291300" exists ...
	I0915 06:33:23.506672    1828 addons.go:69] Setting helm-tiller=true in profile "addons-291300"
	I0915 06:33:23.510670    1828 addons.go:234] Setting addon helm-tiller=true in "addons-291300"
	I0915 06:33:23.510670    1828 host.go:66] Checking if "addons-291300" exists ...
	I0915 06:33:23.509689    1828 host.go:66] Checking if "addons-291300" exists ...
	I0915 06:33:23.515675    1828 out.go:177] * Verifying Kubernetes components...
	I0915 06:33:23.547684    1828 cli_runner.go:164] Run: docker container inspect addons-291300 --format={{.State.Status}}
	I0915 06:33:23.550301    1828 cli_runner.go:164] Run: docker container inspect addons-291300 --format={{.State.Status}}
	I0915 06:33:23.552486    1828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:33:23.553161    1828 cli_runner.go:164] Run: docker container inspect addons-291300 --format={{.State.Status}}
	I0915 06:33:23.555378    1828 cli_runner.go:164] Run: docker container inspect addons-291300 --format={{.State.Status}}
	I0915 06:33:23.556413    1828 cli_runner.go:164] Run: docker container inspect addons-291300 --format={{.State.Status}}
	I0915 06:33:23.559369    1828 cli_runner.go:164] Run: docker container inspect addons-291300 --format={{.State.Status}}
	I0915 06:33:23.559369    1828 cli_runner.go:164] Run: docker container inspect addons-291300 --format={{.State.Status}}
	I0915 06:33:23.559369    1828 cli_runner.go:164] Run: docker container inspect addons-291300 --format={{.State.Status}}
	I0915 06:33:23.560346    1828 cli_runner.go:164] Run: docker container inspect addons-291300 --format={{.State.Status}}
	I0915 06:33:23.563349    1828 cli_runner.go:164] Run: docker container inspect addons-291300 --format={{.State.Status}}
	I0915 06:33:23.564365    1828 cli_runner.go:164] Run: docker container inspect addons-291300 --format={{.State.Status}}
	I0915 06:33:23.564365    1828 cli_runner.go:164] Run: docker container inspect addons-291300 --format={{.State.Status}}
	I0915 06:33:23.566352    1828 cli_runner.go:164] Run: docker container inspect addons-291300 --format={{.State.Status}}
	I0915 06:33:23.570459    1828 cli_runner.go:164] Run: docker container inspect addons-291300 --format={{.State.Status}}
	I0915 06:33:23.589468    1828 cli_runner.go:164] Run: docker container inspect addons-291300 --format={{.State.Status}}
	I0915 06:33:23.596454    1828 cli_runner.go:164] Run: docker container inspect addons-291300 --format={{.State.Status}}
	I0915 06:33:23.708826    1828 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0915 06:33:23.713764    1828 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0915 06:33:23.718764    1828 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0915 06:33:23.718764    1828 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0915 06:33:23.718764    1828 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0915 06:33:23.722771    1828 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0915 06:33:23.729756    1828 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0915 06:33:23.729756    1828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0915 06:33:23.729756    1828 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-291300"
	I0915 06:33:23.730763    1828 host.go:66] Checking if "addons-291300" exists ...
	I0915 06:33:23.734773    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291300
	I0915 06:33:23.740765    1828 host.go:66] Checking if "addons-291300" exists ...
	I0915 06:33:23.741756    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}'" addons-291300
	I0915 06:33:23.744794    1828 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0915 06:33:23.747754    1828 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0915 06:33:23.747754    1828 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0915 06:33:23.748792    1828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0915 06:33:23.752809    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291300
	I0915 06:33:23.754395    1828 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0915 06:33:23.757769    1828 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0915 06:33:23.764759    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-291300
	I0915 06:33:23.764759    1828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0915 06:33:23.767759    1828 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0915 06:33:23.769772    1828 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0915 06:33:23.770760    1828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0915 06:33:23.772772    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291300
	I0915 06:33:23.777093    1828 cli_runner.go:164] Run: docker container inspect addons-291300 --format={{.State.Status}}
	I0915 06:33:23.777093    1828 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0915 06:33:23.781761    1828 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0915 06:33:23.782752    1828 addons.go:234] Setting addon default-storageclass=true in "addons-291300"
	I0915 06:33:23.784961    1828 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0915 06:33:23.787760    1828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0915 06:33:23.787760    1828 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0915 06:33:23.788763    1828 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 06:33:23.789767    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291300
	I0915 06:33:23.791757    1828 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0915 06:33:23.795820    1828 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0915 06:33:23.797429    1828 host.go:66] Checking if "addons-291300" exists ...
	I0915 06:33:23.802455    1828 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0915 06:33:23.812195    1828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0915 06:33:23.807197    1828 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0915 06:33:23.812195    1828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0915 06:33:23.811357    1828 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0915 06:33:23.815196    1828 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0915 06:33:23.832196    1828 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0915 06:33:23.821230    1828 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 06:33:23.833227    1828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0915 06:33:23.825204    1828 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0915 06:33:23.827203    1828 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0915 06:33:23.829372    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291300
	I0915 06:33:23.830211    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291300
	I0915 06:33:23.832196    1828 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:33:23.835197    1828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0915 06:33:23.838195    1828 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0915 06:33:23.838195    1828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0915 06:33:23.842191    1828 cli_runner.go:164] Run: docker container inspect addons-291300 --format={{.State.Status}}
	I0915 06:33:23.846198    1828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0915 06:33:23.847194    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291300
	I0915 06:33:23.848199    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291300
	I0915 06:33:23.861195    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291300
	I0915 06:33:23.863186    1828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0915 06:33:23.864852    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291300
	I0915 06:33:23.885049    1828 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0915 06:33:23.885049    1828 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0915 06:33:23.886918    1828 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0915 06:33:23.890095    1828 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:33:23.893059    1828 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0915 06:33:23.893059    1828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0915 06:33:23.893059    1828 out.go:201] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                      │
	│    Registry addon with docker driver uses port 64889 please use that instead of default port 5000    │
	│                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0915 06:33:23.896073    1828 out.go:177] * For more information see: https://minikube.sigs.k8s.io/docs/drivers/docker
	I0915 06:33:23.898079    1828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64886 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-291300\id_rsa Username:docker}
	I0915 06:33:23.905067    1828 out.go:177]   - Using image docker.io/registry:2.8.3
	I0915 06:33:23.907075    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291300
	I0915 06:33:23.910058    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291300
	I0915 06:33:23.912082    1828 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0915 06:33:23.916061    1828 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0915 06:33:23.916061    1828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0915 06:33:23.916061    1828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64886 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-291300\id_rsa Username:docker}
	I0915 06:33:23.933065    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291300
	I0915 06:33:23.953078    1828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64886 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-291300\id_rsa Username:docker}
	I0915 06:33:23.959055    1828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64886 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-291300\id_rsa Username:docker}
	I0915 06:33:23.960043    1828 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0915 06:33:23.964050    1828 out.go:177]   - Using image docker.io/busybox:stable
	I0915 06:33:23.966057    1828 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0915 06:33:23.966057    1828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0915 06:33:23.988061    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291300
	I0915 06:33:23.995048    1828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64886 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-291300\id_rsa Username:docker}
	I0915 06:33:24.010170    1828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64886 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-291300\id_rsa Username:docker}
	I0915 06:33:24.011058    1828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64886 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-291300\id_rsa Username:docker}
	I0915 06:33:24.012049    1828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64886 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-291300\id_rsa Username:docker}
	I0915 06:33:24.013051    1828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64886 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-291300\id_rsa Username:docker}
	I0915 06:33:24.018055    1828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64886 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-291300\id_rsa Username:docker}
	I0915 06:33:24.019061    1828 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0915 06:33:24.019061    1828 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0915 06:33:24.023051    1828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64886 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-291300\id_rsa Username:docker}
	I0915 06:33:24.030052    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291300
	I0915 06:33:24.030052    1828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64886 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-291300\id_rsa Username:docker}
	I0915 06:33:24.050046    1828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64886 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-291300\id_rsa Username:docker}
	W0915 06:33:24.082048    1828 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	W0915 06:33:24.082048    1828 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0915 06:33:24.082048    1828 retry.go:31] will retry after 371.222945ms: ssh: handshake failed: EOF
	I0915 06:33:24.082048    1828 retry.go:31] will retry after 242.151756ms: ssh: handshake failed: EOF
	W0915 06:33:24.082048    1828 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0915 06:33:24.082048    1828 retry.go:31] will retry after 180.052118ms: ssh: handshake failed: EOF
	I0915 06:33:24.086075    1828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64886 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-291300\id_rsa Username:docker}
	W0915 06:33:24.089079    1828 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0915 06:33:24.089079    1828 retry.go:31] will retry after 129.151964ms: ssh: handshake failed: EOF
	I0915 06:33:24.106050    1828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64886 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-291300\id_rsa Username:docker}
	W0915 06:33:24.375949    1828 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0915 06:33:24.375949    1828 retry.go:31] will retry after 388.851545ms: ssh: handshake failed: EOF
	W0915 06:33:24.479676    1828 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0915 06:33:24.479676    1828 retry.go:31] will retry after 399.772737ms: ssh: handshake failed: EOF
	I0915 06:33:24.780639    1828 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0915 06:33:24.780639    1828 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0915 06:33:24.978654    1828 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.4719705s)
	I0915 06:33:24.979191    1828 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0915 06:33:25.084331    1828 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.5318326s)
	I0915 06:33:25.086639    1828 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0915 06:33:25.086639    1828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0915 06:33:25.102950    1828 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 06:33:25.103963    1828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0915 06:33:25.181614    1828 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0915 06:33:25.181614    1828 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0915 06:33:25.200307    1828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0915 06:33:25.201305    1828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0915 06:33:25.281976    1828 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0915 06:33:25.282104    1828 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0915 06:33:25.282104    1828 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0915 06:33:25.282104    1828 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0915 06:33:25.304806    1828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0915 06:33:25.397107    1828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0915 06:33:25.682671    1828 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0915 06:33:25.682671    1828 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0915 06:33:25.701940    1828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 06:33:25.705930    1828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0915 06:33:25.785252    1828 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0915 06:33:25.785373    1828 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0915 06:33:25.785252    1828 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0915 06:33:25.785373    1828 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0915 06:33:25.800997    1828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0915 06:33:25.883906    1828 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0915 06:33:25.884255    1828 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0915 06:33:25.985813    1828 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0915 06:33:25.986914    1828 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0915 06:33:26.182013    1828 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0915 06:33:26.182013    1828 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0915 06:33:26.384951    1828 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0915 06:33:26.384988    1828 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0915 06:33:26.581055    1828 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0915 06:33:26.581285    1828 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0915 06:33:26.582195    1828 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0915 06:33:26.582195    1828 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0915 06:33:26.583148    1828 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 06:33:26.583148    1828 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0915 06:33:26.583148    1828 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0915 06:33:26.584154    1828 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0915 06:33:26.883096    1828 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0915 06:33:26.883184    1828 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0915 06:33:27.099647    1828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0915 06:33:27.281085    1828 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0915 06:33:27.281085    1828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0915 06:33:27.282610    1828 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0915 06:33:27.282610    1828 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0915 06:33:27.282610    1828 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0915 06:33:27.282859    1828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0915 06:33:27.297054    1828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 06:33:27.381916    1828 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0915 06:33:27.381916    1828 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0915 06:33:27.381916    1828 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:33:27.381916    1828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0915 06:33:27.999242    1828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:33:28.081928    1828 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0915 06:33:28.081983    1828 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0915 06:33:28.083958    1828 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0915 06:33:28.083958    1828 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0915 06:33:28.102929    1828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0915 06:33:28.197294    1828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0915 06:33:28.581959    1828 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0915 06:33:28.582164    1828 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0915 06:33:28.681772    1828 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0915 06:33:28.681772    1828 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0915 06:33:29.281196    1828 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0915 06:33:29.281196    1828 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0915 06:33:29.581190    1828 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.6019632s)
	I0915 06:33:29.581521    1828 start.go:971] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I0915 06:33:29.581616    1828 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0915 06:33:29.581616    1828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0915 06:33:30.180421    1828 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0915 06:33:30.180421    1828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0915 06:33:30.396897    1828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0915 06:33:30.782435    1828 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0915 06:33:30.782518    1828 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0915 06:33:30.782693    1828 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-291300" context rescaled to 1 replicas
	I0915 06:33:31.580284    1828 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0915 06:33:31.580284    1828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0915 06:33:32.280785    1828 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0915 06:33:32.280785    1828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0915 06:33:32.876598    1828 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0915 06:33:32.876598    1828 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0915 06:33:33.397584    1828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0915 06:33:39.187872    1828 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0915 06:33:39.196393    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291300
	I0915 06:33:39.279061    1828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64886 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-291300\id_rsa Username:docker}
	I0915 06:33:40.475883    1828 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0915 06:33:40.782787    1828 addons.go:234] Setting addon gcp-auth=true in "addons-291300"
	I0915 06:33:40.782960    1828 host.go:66] Checking if "addons-291300" exists ...
	I0915 06:33:40.807368    1828 cli_runner.go:164] Run: docker container inspect addons-291300 --format={{.State.Status}}
	I0915 06:33:40.901785    1828 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0915 06:33:40.909215    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-291300
	I0915 06:33:40.989185    1828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64886 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\addons-291300\id_rsa Username:docker}
	I0915 06:33:54.785356    1828 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (29.6821721s)
	I0915 06:33:54.785356    1828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (29.5848158s)
	I0915 06:33:54.785356    1828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (29.6811588s)
	I0915 06:33:54.785356    1828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (29.5838174s)
	I0915 06:33:54.785356    1828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (29.4803178s)
	I0915 06:33:54.785933    1828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (29.3880171s)
	I0915 06:33:54.785998    1828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (29.0798381s)
	I0915 06:33:54.785998    1828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (29.0838274s)
	I0915 06:33:54.785998    1828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (28.9847721s)
	I0915 06:33:54.786551    1828 addons.go:475] Verifying addon ingress=true in "addons-291300"
	I0915 06:33:54.786689    1828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (27.686764s)
	I0915 06:33:54.786772    1828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (27.4895014s)
	I0915 06:33:54.786772    1828 addons.go:475] Verifying addon metrics-server=true in "addons-291300"
	I0915 06:33:54.786772    1828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (26.7873184s)
	W0915 06:33:54.786772    1828 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0915 06:33:54.786772    1828 retry.go:31] will retry after 299.876029ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0915 06:33:54.786772    1828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (26.6836321s)
	I0915 06:33:54.786772    1828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (26.5892678s)
	I0915 06:33:54.786772    1828 addons.go:475] Verifying addon registry=true in "addons-291300"
	I0915 06:33:54.786772    1828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (24.389683s)
	I0915 06:33:54.795875    1828 out.go:177] * Verifying ingress addon...
	I0915 06:33:54.797147    1828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-291300
	I0915 06:33:54.802650    1828 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-291300 service yakd-dashboard -n yakd-dashboard
	
	I0915 06:33:54.804696    1828 out.go:177] * Verifying registry addon...
	I0915 06:33:54.809927    1828 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0915 06:33:54.815424    1828 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0915 06:33:54.881219    1828 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0915 06:33:54.881219    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:54.882145    1828 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0915 06:33:54.882145    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:54.886324    1828 node_ready.go:35] waiting up to 6m0s for node "addons-291300" to be "Ready" ...
	W0915 06:33:54.983957    1828 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0915 06:33:54.984948    1828 node_ready.go:49] node "addons-291300" has status "Ready":"True"
	I0915 06:33:54.984948    1828 node_ready.go:38] duration metric: took 98.068ms for node "addons-291300" to be "Ready" ...
	I0915 06:33:54.984948    1828 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 06:33:55.085418    1828 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-b4jc9" in "kube-system" namespace to be "Ready" ...
	I0915 06:33:55.103271    1828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:33:55.481050    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:55.578619    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:55.878859    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:55.980832    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:56.477506    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:56.477723    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:56.876681    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:56.876986    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:56.883526    1828 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (15.981615s)
	I0915 06:33:56.883526    1828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (23.4857564s)
	I0915 06:33:56.883604    1828 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-291300"
	I0915 06:33:56.891448    1828 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:33:56.897431    1828 out.go:177] * Verifying csi-hostpath-driver addon...
	I0915 06:33:56.909117    1828 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0915 06:33:56.910119    1828 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0915 06:33:56.914833    1828 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0915 06:33:56.914833    1828 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0915 06:33:56.978796    1828 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0915 06:33:56.978993    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:57.279182    1828 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0915 06:33:57.279182    1828 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0915 06:33:57.390986    1828 pod_ready.go:103] pod "coredns-7c65d6cfc9-b4jc9" in "kube-system" namespace has status "Ready":"False"
	I0915 06:33:57.393487    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:57.394082    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:57.484008    1828 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0915 06:33:57.484041    1828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0915 06:33:57.493585    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:57.691552    1828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0915 06:33:57.878951    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:57.878951    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:57.979732    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:58.377930    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:58.378641    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:58.479723    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:58.877251    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:58.878003    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:58.979176    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:59.379749    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:59.380650    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:59.485162    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:33:59.686227    1828 pod_ready.go:103] pod "coredns-7c65d6cfc9-b4jc9" in "kube-system" namespace has status "Ready":"False"
	I0915 06:33:59.879417    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:33:59.880145    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:33:59.981671    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:00.380219    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:00.381221    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:00.481844    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:00.584610    1828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.4807378s)
	I0915 06:34:00.879905    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:00.881564    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:00.979557    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:01.396665    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:01.397227    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:01.477037    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:01.870164    1828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (4.1785792s)
	I0915 06:34:01.882404    1828 addons.go:475] Verifying addon gcp-auth=true in "addons-291300"
	I0915 06:34:01.887363    1828 out.go:177] * Verifying gcp-auth addon...
	I0915 06:34:01.889446    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:01.889934    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:01.894432    1828 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0915 06:34:01.993507    1828 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0915 06:34:01.995747    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:02.101589    1828 pod_ready.go:103] pod "coredns-7c65d6cfc9-b4jc9" in "kube-system" namespace has status "Ready":"False"
	I0915 06:34:02.322812    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:02.325442    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:02.418100    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:02.822952    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:02.825955    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:02.930632    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:03.322096    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:03.324395    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:03.418192    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:03.822429    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:03.823721    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:03.919134    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:04.102298    1828 pod_ready.go:103] pod "coredns-7c65d6cfc9-b4jc9" in "kube-system" namespace has status "Ready":"False"
	I0915 06:34:04.321158    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:04.325645    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:04.417222    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:04.823742    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:04.823742    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:04.923233    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:05.322862    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:05.325767    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:05.418540    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:05.821121    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:05.823982    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:05.917285    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:06.321768    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:06.321768    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:06.418088    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:06.601184    1828 pod_ready.go:103] pod "coredns-7c65d6cfc9-b4jc9" in "kube-system" namespace has status "Ready":"False"
	I0915 06:34:06.820300    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:06.823764    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:06.919143    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:07.320713    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:07.326929    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:07.417561    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:07.822405    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:07.825123    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:07.916930    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:08.321122    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:08.326619    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:08.419410    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:08.601697    1828 pod_ready.go:103] pod "coredns-7c65d6cfc9-b4jc9" in "kube-system" namespace has status "Ready":"False"
	I0915 06:34:08.822246    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:08.826525    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:08.922559    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:09.322147    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:09.325146    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:09.417554    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:09.604482    1828 pod_ready.go:93] pod "coredns-7c65d6cfc9-b4jc9" in "kube-system" namespace has status "Ready":"True"
	I0915 06:34:09.604482    1828 pod_ready.go:82] duration metric: took 14.5189486s for pod "coredns-7c65d6cfc9-b4jc9" in "kube-system" namespace to be "Ready" ...
	I0915 06:34:09.604482    1828 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-291300" in "kube-system" namespace to be "Ready" ...
	I0915 06:34:09.685727    1828 pod_ready.go:93] pod "etcd-addons-291300" in "kube-system" namespace has status "Ready":"True"
	I0915 06:34:09.685842    1828 pod_ready.go:82] duration metric: took 81.3597ms for pod "etcd-addons-291300" in "kube-system" namespace to be "Ready" ...
	I0915 06:34:09.685913    1828 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-291300" in "kube-system" namespace to be "Ready" ...
	I0915 06:34:09.771969    1828 pod_ready.go:93] pod "kube-apiserver-addons-291300" in "kube-system" namespace has status "Ready":"True"
	I0915 06:34:09.771969    1828 pod_ready.go:82] duration metric: took 86.056ms for pod "kube-apiserver-addons-291300" in "kube-system" namespace to be "Ready" ...
	I0915 06:34:09.771969    1828 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-291300" in "kube-system" namespace to be "Ready" ...
	I0915 06:34:09.793309    1828 pod_ready.go:93] pod "kube-controller-manager-addons-291300" in "kube-system" namespace has status "Ready":"True"
	I0915 06:34:09.793309    1828 pod_ready.go:82] duration metric: took 21.339ms for pod "kube-controller-manager-addons-291300" in "kube-system" namespace to be "Ready" ...
	I0915 06:34:09.793373    1828 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-q7ggz" in "kube-system" namespace to be "Ready" ...
	I0915 06:34:09.806665    1828 pod_ready.go:93] pod "kube-proxy-q7ggz" in "kube-system" namespace has status "Ready":"True"
	I0915 06:34:09.806665    1828 pod_ready.go:82] duration metric: took 13.2913ms for pod "kube-proxy-q7ggz" in "kube-system" namespace to be "Ready" ...
	I0915 06:34:09.806665    1828 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-291300" in "kube-system" namespace to be "Ready" ...
	I0915 06:34:09.877961    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:09.878665    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:09.918115    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:09.992048    1828 pod_ready.go:93] pod "kube-scheduler-addons-291300" in "kube-system" namespace has status "Ready":"True"
	I0915 06:34:09.992163    1828 pod_ready.go:82] duration metric: took 185.4964ms for pod "kube-scheduler-addons-291300" in "kube-system" namespace to be "Ready" ...
	I0915 06:34:09.992197    1828 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace to be "Ready" ...
	I0915 06:34:10.321188    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:10.371761    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:10.417144    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:10.821864    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:10.826653    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:10.919250    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:11.374352    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:11.379329    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:11.417433    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:11.823232    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:11.825248    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:11.917559    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:12.008463    1828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace has status "Ready":"False"
	I0915 06:34:12.321549    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:12.324708    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:12.417522    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:12.822002    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:12.827371    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:12.917237    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:13.321307    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:13.326936    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:13.418051    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:13.822113    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:13.826711    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:13.918788    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:14.321449    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:14.328063    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:14.418088    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:14.506743    1828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace has status "Ready":"False"
	I0915 06:34:14.820920    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:14.824431    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:14.919924    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:15.323106    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:15.323954    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:15.417960    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:15.820878    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:15.825900    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:15.918047    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:16.322423    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:16.323014    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:16.749902    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:17.106520    1828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace has status "Ready":"False"
	I0915 06:34:17.107971    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:17.110308    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:17.111043    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:17.346858    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:17.347187    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:17.543906    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:17.822456    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:17.829174    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:17.917578    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:18.321365    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:18.323312    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:18.417524    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:19.050407    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:19.050960    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:19.050960    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:19.359110    1828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace has status "Ready":"False"
	I0915 06:34:19.361555    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:19.362942    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:19.418654    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:19.821509    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:19.823848    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:19.917096    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:20.324506    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:20.324506    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:20.421826    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:20.821381    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:20.871399    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:20.922398    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:21.325113    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:21.326273    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:21.490839    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:21.584096    1828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace has status "Ready":"False"
	I0915 06:34:21.821121    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:21.822126    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:21.921133    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:22.322263    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:22.326267    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:22.419243    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:22.822024    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:22.823026    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:22.920729    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:23.328623    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:23.331275    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:23.418572    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:23.877597    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:23.881981    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:23.975917    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:24.010818    1828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace has status "Ready":"False"
	I0915 06:34:24.322046    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:24.325631    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:24.418859    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:24.821146    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:24.824893    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:24.920929    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:25.323229    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:25.323863    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:25.419280    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:25.822802    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:25.824813    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:25.920615    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:26.320879    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:26.323427    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:26.419583    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:26.509798    1828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace has status "Ready":"False"
	I0915 06:34:26.822728    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:26.823227    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:26.918887    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:27.323123    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:27.325708    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:27.427973    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:27.851450    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:27.851637    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:27.917949    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:28.320136    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:28.324007    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:28.418676    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:28.823638    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:28.825109    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:28.921683    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:29.011881    1828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace has status "Ready":"False"
	I0915 06:34:29.322816    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:29.322816    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:29.417825    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:29.821829    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:29.822818    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:29.918853    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:30.323877    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:30.323877    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:30.420845    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:30.822828    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:30.823838    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:30.924871    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:31.321846    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:31.322890    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:31.420877    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:31.506874    1828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace has status "Ready":"False"
	I0915 06:34:31.824852    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:31.824852    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:31.918925    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:32.323698    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:32.329680    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:32.418685    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:32.871798    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:32.872260    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:32.917934    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:33.324184    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:33.324184    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:33.418397    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:33.508307    1828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace has status "Ready":"False"
	I0915 06:34:33.823405    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:33.826657    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:33.918149    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:34.321916    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:34.325468    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:34.417673    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:34.822470    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:34.826603    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:34.918814    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:35.322703    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:35.325741    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:35.419168    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:35.819918    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:35.824905    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:35.918409    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:36.007081    1828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace has status "Ready":"False"
	I0915 06:34:36.322058    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:36.325672    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:36.422810    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:36.822641    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:36.825346    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:36.916245    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:37.323648    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:37.323935    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:37.417113    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:37.822257    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:37.824041    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:37.920902    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:38.007557    1828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace has status "Ready":"False"
	I0915 06:34:38.321103    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:38.323318    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:38.417661    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:38.822491    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:38.825132    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:38.918863    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:39.322865    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:39.325716    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:39.419089    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:39.821768    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:39.824764    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:39.921905    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:40.008790    1828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace has status "Ready":"False"
	I0915 06:34:40.323104    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:40.327985    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:40.418780    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:40.822351    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:40.825917    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:40.918578    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:41.322433    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:41.324812    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:41.418426    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:41.823266    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:41.826050    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:41.969548    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:42.010769    1828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace has status "Ready":"False"
	I0915 06:34:42.323230    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:42.325174    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:42.420048    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:42.822075    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:42.824940    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:42.918751    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:43.322002    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:43.322878    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:43.420375    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:43.822189    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:43.825869    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:43.917754    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:44.322661    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:44.326121    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:44.418287    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:44.507068    1828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace has status "Ready":"False"
	I0915 06:34:44.820939    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:44.825000    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:44.917357    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:45.322397    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:45.325843    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:45.418853    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:45.822347    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:45.826486    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:45.916611    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:46.325609    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:46.328158    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:46.419030    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:46.509786    1828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace has status "Ready":"False"
	I0915 06:34:46.821183    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:46.824901    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:46.917178    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:47.322430    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:47.325598    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:47.417826    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:47.821295    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:47.824458    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:47.917250    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:48.321263    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:48.326062    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:48.417922    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:48.821909    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:48.826190    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:48.921269    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:49.007043    1828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace has status "Ready":"False"
	I0915 06:34:49.323705    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:49.324402    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:49.418063    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:49.822327    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:49.825193    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:49.919090    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:50.322527    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:50.324173    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:50.417232    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:50.822338    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:50.825406    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:50.916729    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:51.322286    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:51.326519    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:51.425971    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:51.513003    1828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace has status "Ready":"False"
	I0915 06:34:51.822139    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:51.823967    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:51.919609    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:52.321310    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:52.325736    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:52.417441    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:52.822245    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:52.824543    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:52.917984    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:53.324791    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:53.326678    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:53.418381    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:53.823094    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:53.825454    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:53.919592    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:54.009331    1828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace has status "Ready":"False"
	I0915 06:34:54.322338    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:54.325705    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:54.418024    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:54.822296    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:54.825701    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:54.919518    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:55.322756    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:55.324766    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:55.449526    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:56.065217    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:56.066787    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:56.067097    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:56.073319    1828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace has status "Ready":"False"
	I0915 06:34:56.322241    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:56.323600    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:56.420599    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:56.821084    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:56.823117    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:56.918243    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:57.715656    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:57.716026    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:57.720987    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:57.994194    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:57.995669    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:57.995669    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:58.329626    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:58.332843    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:58.421289    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:58.508016    1828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace has status "Ready":"False"
	I0915 06:34:58.821621    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:58.867613    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:58.921608    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:59.322506    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:59.366733    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:59.468642    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:34:59.822100    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:34:59.826928    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:34:59.920130    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:00.322943    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:00.322943    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:00.418001    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:00.822758    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:00.824263    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:00.917469    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:01.011366    1828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace has status "Ready":"False"
	I0915 06:35:01.321108    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:01.326067    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:01.419268    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:01.827532    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:01.828909    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:01.920259    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:02.324623    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:02.327554    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:02.419428    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:02.825448    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:02.825448    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:02.920423    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:03.368453    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:03.369435    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:03.420038    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:03.506017    1828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace has status "Ready":"False"
	I0915 06:35:03.822943    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:03.826939    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:03.917929    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:04.323136    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:04.324722    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:04.418361    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:04.822555    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:04.825090    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:04.919199    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:05.322714    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:05.323411    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:05.420539    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:05.510196    1828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace has status "Ready":"False"
	I0915 06:35:05.823088    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:05.826155    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:05.920032    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:06.323124    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:06.323582    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:06.418870    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:06.822581    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:06.824038    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:06.917566    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:07.361228    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:07.361983    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:07.615878    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:07.625297    1828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace has status "Ready":"False"
	I0915 06:35:07.823808    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:07.824821    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:07.917942    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:08.337164    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:08.352536    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:08.437166    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:08.821076    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:08.823059    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:08.920060    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:09.321764    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:09.326610    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:09.419614    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:09.868643    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:09.868643    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:09.980504    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:10.006520    1828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace has status "Ready":"False"
	I0915 06:35:10.324056    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:10.325054    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:10.419054    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:10.822103    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:10.823071    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:10.919671    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:11.323378    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:11.324328    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:11.418332    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:11.823330    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:11.823330    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:11.919329    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:12.323724    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:12.323724    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:12.469511    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:12.507563    1828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace has status "Ready":"False"
	I0915 06:35:12.820722    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:12.823791    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:12.918735    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:13.324489    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:13.326519    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:13.419334    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:13.822656    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:13.864207    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:13.923824    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:14.323303    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:14.325088    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:14.420070    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:14.508434    1828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace has status "Ready":"False"
	I0915 06:35:14.820545    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:14.822541    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:14.920248    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:15.321591    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:15.326723    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:15.419109    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:15.822576    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:15.824233    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:15.919146    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:16.322420    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:16.325532    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:16.418869    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:16.822896    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:16.825971    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:16.918105    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:17.009543    1828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace has status "Ready":"False"
	I0915 06:35:17.322475    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:17.362379    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:17.418182    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:17.821965    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:17.824949    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:17.917908    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:18.323741    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:18.324581    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:18.428002    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:18.821428    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:18.822425    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:18.965104    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:19.322543    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:19.324586    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:19.419602    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:19.507114    1828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace has status "Ready":"False"
	I0915 06:35:19.822438    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:19.824118    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:19.918038    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:20.322638    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:20.323626    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:20.417705    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:20.822189    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:20.824188    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:20.918851    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:21.322187    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:21.324175    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:21.418187    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:21.824183    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:21.824183    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:21.967702    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:22.010026    1828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace has status "Ready":"False"
	I0915 06:35:22.324128    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:22.324614    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:22.467453    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:22.968902    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:22.969415    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:22.971997    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:23.367752    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:23.368952    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:23.473232    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:23.866640    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:23.866832    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:23.968991    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:24.323199    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:24.325220    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:24.473272    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:24.569663    1828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace has status "Ready":"False"
	I0915 06:35:24.822662    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:24.826260    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:24.920786    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:25.322934    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:25.326478    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:25.420775    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:25.878295    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:25.878846    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:26.071567    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:26.326325    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:26.326912    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:26.420280    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:26.865372    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:26.865372    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:26.965358    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:27.007351    1828 pod_ready.go:103] pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace has status "Ready":"False"
	I0915 06:35:27.370006    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:27.379988    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:27.421056    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:27.822869    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:27.822869    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:28.084477    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:28.091261    1828 pod_ready.go:93] pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace has status "Ready":"True"
	I0915 06:35:28.091261    1828 pod_ready.go:82] duration metric: took 1m18.0984431s for pod "metrics-server-84c5f94fbc-fmjgd" in "kube-system" namespace to be "Ready" ...
	I0915 06:35:28.091261    1828 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-4b4bn" in "kube-system" namespace to be "Ready" ...
	I0915 06:35:28.104275    1828 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-4b4bn" in "kube-system" namespace has status "Ready":"True"
	I0915 06:35:28.104275    1828 pod_ready.go:82] duration metric: took 13.0136ms for pod "nvidia-device-plugin-daemonset-4b4bn" in "kube-system" namespace to be "Ready" ...
	I0915 06:35:28.104275    1828 pod_ready.go:39] duration metric: took 1m33.1185873s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 06:35:28.104275    1828 api_server.go:52] waiting for apiserver process to appear ...
	I0915 06:35:28.111265    1828 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 06:35:28.210270    1828 logs.go:276] 1 containers: [51e7d21fa829]
	I0915 06:35:28.220270    1828 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 06:35:28.301279    1828 logs.go:276] 1 containers: [a5b41ce46b6f]
	I0915 06:35:28.312311    1828 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 06:35:28.322276    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:28.396263    1828 logs.go:276] 1 containers: [50b96eeb3dbd]
	I0915 06:35:28.407276    1828 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 06:35:28.424286    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:28.425277    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:28.491307    1828 logs.go:276] 1 containers: [c2c691d531f2]
	I0915 06:35:28.501263    1828 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 06:35:28.568291    1828 logs.go:276] 1 containers: [1cd5f0902f29]
	I0915 06:35:28.578271    1828 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 06:35:28.688273    1828 logs.go:276] 1 containers: [e39d00629d74]
	I0915 06:35:28.699301    1828 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 06:35:28.765286    1828 logs.go:276] 0 containers: []
	W0915 06:35:28.765286    1828 logs.go:278] No container was found matching "kindnet"
	I0915 06:35:28.765286    1828 logs.go:123] Gathering logs for kubelet ...
	I0915 06:35:28.766266    1828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 06:35:28.823267    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:28.828270    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:28.918282    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:28.921301    1828 logs.go:123] Gathering logs for dmesg ...
	I0915 06:35:28.921301    1828 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 06:35:28.964272    1828 logs.go:123] Gathering logs for coredns [50b96eeb3dbd] ...
	I0915 06:35:28.964272    1828 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50b96eeb3dbd"
	I0915 06:35:29.028325    1828 logs.go:123] Gathering logs for kube-controller-manager [e39d00629d74] ...
	I0915 06:35:29.028325    1828 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39d00629d74"
	I0915 06:35:29.148279    1828 logs.go:123] Gathering logs for describe nodes ...
	I0915 06:35:29.148279    1828 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 06:35:29.364299    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:29.365283    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:29.468285    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:29.600921    1828 logs.go:123] Gathering logs for kube-apiserver [51e7d21fa829] ...
	I0915 06:35:29.600921    1828 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e7d21fa829"
	I0915 06:35:29.733929    1828 logs.go:123] Gathering logs for etcd [a5b41ce46b6f] ...
	I0915 06:35:29.733929    1828 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5b41ce46b6f"
	I0915 06:35:29.865911    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:29.866929    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:29.969928    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:29.993909    1828 logs.go:123] Gathering logs for kube-scheduler [c2c691d531f2] ...
	I0915 06:35:29.993909    1828 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c691d531f2"
	I0915 06:35:30.110046    1828 logs.go:123] Gathering logs for kube-proxy [1cd5f0902f29] ...
	I0915 06:35:30.110046    1828 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cd5f0902f29"
	I0915 06:35:30.277731    1828 logs.go:123] Gathering logs for Docker ...
	I0915 06:35:30.277731    1828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 06:35:30.365673    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:30.366680    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:30.395676    1828 logs.go:123] Gathering logs for container status ...
	I0915 06:35:30.395676    1828 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 06:35:30.468348    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:30.821640    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:30.825537    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:30.918672    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:31.322394    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:31.325971    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:31.419606    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:31.822909    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:31.825638    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:31.919881    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:32.322091    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:32.325062    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:32.418131    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:32.822643    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:32.827717    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:32.918331    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:33.184653    1828 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 06:35:33.263805    1828 api_server.go:72] duration metric: took 2m9.756103s to wait for apiserver process to appear ...
	I0915 06:35:33.263923    1828 api_server.go:88] waiting for apiserver healthz status ...
	I0915 06:35:33.272856    1828 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 06:35:33.321991    1828 logs.go:276] 1 containers: [51e7d21fa829]
	I0915 06:35:33.324768    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:33.327161    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:33.331816    1828 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 06:35:33.398155    1828 logs.go:276] 1 containers: [a5b41ce46b6f]
	I0915 06:35:33.408198    1828 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 06:35:33.418787    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:33.485526    1828 logs.go:276] 1 containers: [50b96eeb3dbd]
	I0915 06:35:33.494411    1828 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 06:35:33.563223    1828 logs.go:276] 1 containers: [c2c691d531f2]
	I0915 06:35:33.572774    1828 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 06:35:33.617778    1828 logs.go:276] 1 containers: [1cd5f0902f29]
	I0915 06:35:33.627227    1828 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 06:35:33.695069    1828 logs.go:276] 1 containers: [e39d00629d74]
	I0915 06:35:33.704405    1828 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 06:35:33.770066    1828 logs.go:276] 0 containers: []
	W0915 06:35:33.770066    1828 logs.go:278] No container was found matching "kindnet"
	I0915 06:35:33.770162    1828 logs.go:123] Gathering logs for coredns [50b96eeb3dbd] ...
	I0915 06:35:33.770187    1828 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50b96eeb3dbd"
	I0915 06:35:33.823056    1828 logs.go:123] Gathering logs for kube-proxy [1cd5f0902f29] ...
	I0915 06:35:33.823056    1828 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cd5f0902f29"
	I0915 06:35:33.824094    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:33.825896    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:33.900051    1828 logs.go:123] Gathering logs for Docker ...
	I0915 06:35:33.900183    1828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 06:35:33.920012    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:33.987293    1828 logs.go:123] Gathering logs for dmesg ...
	I0915 06:35:33.987293    1828 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 06:35:34.020585    1828 logs.go:123] Gathering logs for describe nodes ...
	I0915 06:35:34.020585    1828 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 06:35:34.308079    1828 logs.go:123] Gathering logs for etcd [a5b41ce46b6f] ...
	I0915 06:35:34.309158    1828 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5b41ce46b6f"
	I0915 06:35:34.362364    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:34.369012    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:34.472912    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:34.577616    1828 logs.go:123] Gathering logs for kube-controller-manager [e39d00629d74] ...
	I0915 06:35:34.578648    1828 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39d00629d74"
	I0915 06:35:34.720566    1828 logs.go:123] Gathering logs for container status ...
	I0915 06:35:34.720566    1828 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 06:35:34.823731    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:34.826063    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:34.886891    1828 logs.go:123] Gathering logs for kubelet ...
	I0915 06:35:34.887433    1828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 06:35:34.919770    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:35.061237    1828 logs.go:123] Gathering logs for kube-apiserver [51e7d21fa829] ...
	I0915 06:35:35.061237    1828 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e7d21fa829"
	I0915 06:35:35.149424    1828 logs.go:123] Gathering logs for kube-scheduler [c2c691d531f2] ...
	I0915 06:35:35.149424    1828 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c691d531f2"
	I0915 06:35:35.321422    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:35.325490    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:35.421007    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:35.822390    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:35.825827    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:35.918275    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:36.322318    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:36.326677    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:36.418604    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:36.822133    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:36.823412    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:36.917358    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:37.323983    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:37.325704    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:37.419207    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:37.713269    1828 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:64885/healthz ...
	I0915 06:35:37.728527    1828 api_server.go:279] https://127.0.0.1:64885/healthz returned 200:
	ok
	I0915 06:35:37.732453    1828 api_server.go:141] control plane version: v1.31.1
	I0915 06:35:37.732453    1828 api_server.go:131] duration metric: took 4.468495s to wait for apiserver health ...
	I0915 06:35:37.732453    1828 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 06:35:37.741953    1828 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0915 06:35:37.812027    1828 logs.go:276] 1 containers: [51e7d21fa829]
	I0915 06:35:37.822414    1828 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0915 06:35:37.864812    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:37.865633    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:37.962141    1828 logs.go:276] 1 containers: [a5b41ce46b6f]
	I0915 06:35:37.969326    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:37.975124    1828 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0915 06:35:38.078178    1828 logs.go:276] 1 containers: [50b96eeb3dbd]
	I0915 06:35:38.089475    1828 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0915 06:35:38.201276    1828 logs.go:276] 1 containers: [c2c691d531f2]
	I0915 06:35:38.212225    1828 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0915 06:35:38.300516    1828 logs.go:276] 1 containers: [1cd5f0902f29]
	I0915 06:35:38.309112    1828 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0915 06:35:38.363715    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:38.367004    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:38.402369    1828 logs.go:276] 1 containers: [e39d00629d74]
	I0915 06:35:38.413141    1828 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0915 06:35:38.467302    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:38.494635    1828 logs.go:276] 0 containers: []
	W0915 06:35:38.494844    1828 logs.go:278] No container was found matching "kindnet"
	I0915 06:35:38.494844    1828 logs.go:123] Gathering logs for kubelet ...
	I0915 06:35:38.494844    1828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0915 06:35:38.759939    1828 logs.go:123] Gathering logs for dmesg ...
	I0915 06:35:38.759939    1828 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0915 06:35:38.796552    1828 logs.go:123] Gathering logs for etcd [a5b41ce46b6f] ...
	I0915 06:35:38.796608    1828 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5b41ce46b6f"
	I0915 06:35:38.822755    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:38.863802    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:38.970978    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:39.061672    1828 logs.go:123] Gathering logs for Docker ...
	I0915 06:35:39.061672    1828 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0915 06:35:39.129333    1828 logs.go:123] Gathering logs for container status ...
	I0915 06:35:39.129333    1828 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0915 06:35:39.365350    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:39.366494    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:39.465056    1828 logs.go:123] Gathering logs for describe nodes ...
	I0915 06:35:39.465228    1828 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0915 06:35:39.468124    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:39.865962    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:39.866729    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:39.876808    1828 logs.go:123] Gathering logs for kube-apiserver [51e7d21fa829] ...
	I0915 06:35:39.876808    1828 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 51e7d21fa829"
	I0915 06:35:39.966552    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:40.014884    1828 logs.go:123] Gathering logs for coredns [50b96eeb3dbd] ...
	I0915 06:35:40.014884    1828 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 50b96eeb3dbd"
	I0915 06:35:40.173907    1828 logs.go:123] Gathering logs for kube-scheduler [c2c691d531f2] ...
	I0915 06:35:40.173907    1828 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2c691d531f2"
	I0915 06:35:40.300317    1828 logs.go:123] Gathering logs for kube-proxy [1cd5f0902f29] ...
	I0915 06:35:40.300317    1828 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1cd5f0902f29"
	I0915 06:35:40.364628    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:40.365309    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:40.399771    1828 logs.go:123] Gathering logs for kube-controller-manager [e39d00629d74] ...
	I0915 06:35:40.399771    1828 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e39d00629d74"
	I0915 06:35:40.476970    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:40.824867    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:40.824867    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:40.920528    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:41.322073    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:41.323575    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:41.418331    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:41.824307    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:41.826268    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:41.919220    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:42.320876    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:42.322865    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:42.418208    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:42.825175    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:42.828152    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:42.919097    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:43.131333    1828 system_pods.go:59] 18 kube-system pods found
	I0915 06:35:43.131333    1828 system_pods.go:61] "coredns-7c65d6cfc9-b4jc9" [93c2ef9d-358b-43b2-883d-ac5ec27b12fd] Running
	I0915 06:35:43.131333    1828 system_pods.go:61] "csi-hostpath-attacher-0" [d31e9e25-e90d-4915-9734-86badead8020] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0915 06:35:43.131333    1828 system_pods.go:61] "csi-hostpath-resizer-0" [8634ad1a-508b-4129-b087-3f4b4bece6dc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0915 06:35:43.131333    1828 system_pods.go:61] "csi-hostpathplugin-xst75" [9d655397-3e2b-467b-9cb0-308c3fb22fcf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0915 06:35:43.131333    1828 system_pods.go:61] "etcd-addons-291300" [098776df-5cfb-419e-8353-efef79081606] Running
	I0915 06:35:43.131333    1828 system_pods.go:61] "kube-apiserver-addons-291300" [3ba74fa8-f5d9-43fd-b5da-8f99f8126f95] Running
	I0915 06:35:43.131333    1828 system_pods.go:61] "kube-controller-manager-addons-291300" [9aeb7668-fa82-4e86-8603-af01ff78c016] Running
	I0915 06:35:43.131333    1828 system_pods.go:61] "kube-ingress-dns-minikube" [98ce82d0-7770-4c99-9797-9cdd41d3104c] Running
	I0915 06:35:43.131333    1828 system_pods.go:61] "kube-proxy-q7ggz" [edfb3c1d-4a7f-4e29-b0ff-609c6e04d4ba] Running
	I0915 06:35:43.131333    1828 system_pods.go:61] "kube-scheduler-addons-291300" [0e6f5481-f013-4123-8ffe-0101fb927e23] Running
	I0915 06:35:43.131333    1828 system_pods.go:61] "metrics-server-84c5f94fbc-fmjgd" [247331d3-cf85-4e2f-8739-94d511c0a400] Running
	I0915 06:35:43.131333    1828 system_pods.go:61] "nvidia-device-plugin-daemonset-4b4bn" [fbde28c1-eb98-436e-91b6-a94e359bc1a4] Running
	I0915 06:35:43.131333    1828 system_pods.go:61] "registry-66c9cd494c-wm6fs" [a7194b70-c1f2-4046-a1e4-d8de0f1a5fff] Running
	I0915 06:35:43.131333    1828 system_pods.go:61] "registry-proxy-rdwl4" [19b0cfa9-36e8-495f-b934-367e8398b5da] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0915 06:35:43.131333    1828 system_pods.go:61] "snapshot-controller-56fcc65765-92hfw" [de678f7e-df62-42d6-93cf-602a65c22ff9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0915 06:35:43.131333    1828 system_pods.go:61] "snapshot-controller-56fcc65765-qvtb6" [60215267-c80f-470a-8451-d81555a7a6d0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0915 06:35:43.131333    1828 system_pods.go:61] "storage-provisioner" [7f439203-a3ba-4201-a570-7ad5faff8408] Running
	I0915 06:35:43.131333    1828 system_pods.go:61] "tiller-deploy-b48cc5f79-cpg74" [6745c867-b5d7-4421-982e-0bec5d59ee9b] Running
	I0915 06:35:43.131333    1828 system_pods.go:74] duration metric: took 5.398838s to wait for pod list to return data ...
	I0915 06:35:43.131333    1828 default_sa.go:34] waiting for default service account to be created ...
	I0915 06:35:43.137348    1828 default_sa.go:45] found service account: "default"
	I0915 06:35:43.137348    1828 default_sa.go:55] duration metric: took 6.0148ms for default service account to be created ...
	I0915 06:35:43.137348    1828 system_pods.go:116] waiting for k8s-apps to be running ...
	I0915 06:35:43.150348    1828 system_pods.go:86] 18 kube-system pods found
	I0915 06:35:43.150348    1828 system_pods.go:89] "coredns-7c65d6cfc9-b4jc9" [93c2ef9d-358b-43b2-883d-ac5ec27b12fd] Running
	I0915 06:35:43.150348    1828 system_pods.go:89] "csi-hostpath-attacher-0" [d31e9e25-e90d-4915-9734-86badead8020] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0915 06:35:43.150348    1828 system_pods.go:89] "csi-hostpath-resizer-0" [8634ad1a-508b-4129-b087-3f4b4bece6dc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0915 06:35:43.150348    1828 system_pods.go:89] "csi-hostpathplugin-xst75" [9d655397-3e2b-467b-9cb0-308c3fb22fcf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0915 06:35:43.150348    1828 system_pods.go:89] "etcd-addons-291300" [098776df-5cfb-419e-8353-efef79081606] Running
	I0915 06:35:43.150348    1828 system_pods.go:89] "kube-apiserver-addons-291300" [3ba74fa8-f5d9-43fd-b5da-8f99f8126f95] Running
	I0915 06:35:43.150348    1828 system_pods.go:89] "kube-controller-manager-addons-291300" [9aeb7668-fa82-4e86-8603-af01ff78c016] Running
	I0915 06:35:43.150348    1828 system_pods.go:89] "kube-ingress-dns-minikube" [98ce82d0-7770-4c99-9797-9cdd41d3104c] Running
	I0915 06:35:43.150348    1828 system_pods.go:89] "kube-proxy-q7ggz" [edfb3c1d-4a7f-4e29-b0ff-609c6e04d4ba] Running
	I0915 06:35:43.150348    1828 system_pods.go:89] "kube-scheduler-addons-291300" [0e6f5481-f013-4123-8ffe-0101fb927e23] Running
	I0915 06:35:43.150348    1828 system_pods.go:89] "metrics-server-84c5f94fbc-fmjgd" [247331d3-cf85-4e2f-8739-94d511c0a400] Running
	I0915 06:35:43.150348    1828 system_pods.go:89] "nvidia-device-plugin-daemonset-4b4bn" [fbde28c1-eb98-436e-91b6-a94e359bc1a4] Running
	I0915 06:35:43.150348    1828 system_pods.go:89] "registry-66c9cd494c-wm6fs" [a7194b70-c1f2-4046-a1e4-d8de0f1a5fff] Running
	I0915 06:35:43.150348    1828 system_pods.go:89] "registry-proxy-rdwl4" [19b0cfa9-36e8-495f-b934-367e8398b5da] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0915 06:35:43.150348    1828 system_pods.go:89] "snapshot-controller-56fcc65765-92hfw" [de678f7e-df62-42d6-93cf-602a65c22ff9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0915 06:35:43.150348    1828 system_pods.go:89] "snapshot-controller-56fcc65765-qvtb6" [60215267-c80f-470a-8451-d81555a7a6d0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0915 06:35:43.150348    1828 system_pods.go:89] "storage-provisioner" [7f439203-a3ba-4201-a570-7ad5faff8408] Running
	I0915 06:35:43.150348    1828 system_pods.go:89] "tiller-deploy-b48cc5f79-cpg74" [6745c867-b5d7-4421-982e-0bec5d59ee9b] Running
	I0915 06:35:43.150348    1828 system_pods.go:126] duration metric: took 13.0003ms to wait for k8s-apps to be running ...
	I0915 06:35:43.150348    1828 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 06:35:43.161337    1828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 06:35:43.187386    1828 system_svc.go:56] duration metric: took 37.0372ms WaitForService to wait for kubelet
	I0915 06:35:43.187386    1828 kubeadm.go:582] duration metric: took 2m19.6796063s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 06:35:43.187386    1828 node_conditions.go:102] verifying NodePressure condition ...
	I0915 06:35:43.193398    1828 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0915 06:35:43.193398    1828 node_conditions.go:123] node cpu capacity is 16
	I0915 06:35:43.193398    1828 node_conditions.go:105] duration metric: took 6.0126ms to run NodePressure ...
	I0915 06:35:43.193398    1828 start.go:241] waiting for startup goroutines ...
	I0915 06:35:43.365022    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:43.365250    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:43.466908    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:43.821909    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:43.825079    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:43.919897    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:44.323668    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:44.325648    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:44.418660    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:45.258834    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:45.259752    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:45.261161    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:45.348275    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:45.350083    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:45.508433    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:45.822163    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:45.825683    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:45.918612    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:46.324308    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:46.326568    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:46.426181    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:46.822545    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:46.824521    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:46.923464    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:47.323420    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:35:47.323420    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:47.423426    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:47.820764    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:47.822761    1828 kapi.go:107] duration metric: took 1m53.0064398s to wait for kubernetes.io/minikube-addons=registry ...
	I0915 06:35:47.918766    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:48.320047    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:48.468131    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:48.864138    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:48.967142    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:49.369425    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:49.417010    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:49.823657    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:49.920143    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:50.323413    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:50.419400    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:50.822085    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:50.918861    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:51.321592    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:51.437325    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:51.822781    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:51.918653    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:52.326965    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:52.422232    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:52.820810    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:52.918806    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:53.323638    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:53.419311    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:53.822363    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:53.918273    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:54.322033    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:54.418468    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:54.822463    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:54.918565    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:55.322631    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:55.420446    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:55.822980    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:55.919339    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:56.322972    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:56.417802    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:56.825394    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:56.917872    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:57.322278    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:57.422921    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:57.867707    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:57.917763    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:58.323449    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:58.418511    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:58.822646    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:58.920950    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:59.326643    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:59.418604    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:35:59.831079    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:35:59.947967    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:00.323836    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:00.466856    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:00.822537    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:00.918125    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:01.321042    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:01.419073    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:01.823468    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:01.918433    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:02.323545    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:02.420968    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:02.826757    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:02.926787    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:03.323285    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:03.419301    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:03.863656    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:03.966644    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:04.324342    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:04.418892    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:04.821402    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:04.919982    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:05.323089    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:05.418461    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:05.822861    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:05.918805    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:06.324575    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:06.418542    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:06.822721    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:06.920977    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:07.497968    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:07.497968    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:07.821193    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:07.925195    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:08.322295    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:08.421800    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:08.821463    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:08.923568    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:09.362483    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:09.464806    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:09.826301    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:09.958334    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:10.323320    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:10.418319    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:10.821308    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:10.960311    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:11.363512    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:11.416844    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:11.820960    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:11.919443    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:12.323114    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:12.417392    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:12.821646    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:12.918411    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:13.321852    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:13.418686    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:13.822444    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:13.918923    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:14.322467    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:14.418568    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:14.860658    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:14.918636    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:15.321508    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:15.419292    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:16.121411    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:16.121648    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:16.410921    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:16.420619    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:16.829811    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:16.967900    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:17.368791    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:17.481898    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:17.821553    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:17.926662    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:18.322016    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:18.420626    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:18.822605    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:18.920606    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:19.325291    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:19.418269    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:19.823310    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:19.963316    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:20.360053    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:20.470893    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:20.822301    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:20.917721    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:21.323008    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:21.418319    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:21.823067    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:21.918929    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:22.324265    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:22.473364    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:22.823327    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:22.922469    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:23.322687    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:23.421508    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:23.823537    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:23.919544    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:24.360543    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:24.423507    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:24.823772    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:24.966913    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:25.321857    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:25.418574    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:25.859556    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:25.920103    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:26.323620    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:26.604549    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:26.823079    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:26.922587    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:27.364825    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:27.467668    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:27.822581    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:27.957825    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:28.360495    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:28.463321    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:28.828997    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:28.922317    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:29.323005    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:29.420625    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:29.821961    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:29.920959    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:30.322001    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:30.462986    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:30.861379    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:30.962726    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:31.324147    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:31.418781    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:31.822286    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:31.918908    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:32.321182    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:32.420767    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:32.827429    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:32.917943    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:33.328933    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:33.427592    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:33.825569    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:33.918580    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:34.323117    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:34.422175    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:34.823485    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:34.919485    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:35.321177    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:35.421157    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:35.821771    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:35.918376    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:36.321046    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:36.420710    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:36.823280    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:36.960250    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:37.326682    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:37.609274    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:37.825597    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:37.929774    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:38.323006    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:38.418431    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:38.824212    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:38.920231    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:39.329212    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:39.460231    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:39.820819    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:39.918760    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:40.323203    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:40.457846    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:40.820326    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:40.930478    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:41.321680    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:41.418901    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:41.872406    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:41.918204    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:42.324888    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:42.418714    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:42.826225    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:42.919173    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:43.322589    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:43.420231    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:43.821613    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:43.918627    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:44.321404    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:44.419478    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:44.859826    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:44.957759    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:45.322980    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:45.418263    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:45.822641    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:45.959441    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:46.324177    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:46.422734    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:46.823591    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:47.025232    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:47.355796    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:47.463831    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:47.861776    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:47.967050    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:48.361196    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:48.419207    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:48.823717    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:48.919305    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:49.358857    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:49.460986    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:49.867132    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:49.963710    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:50.358254    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:50.460233    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:50.861118    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:50.960894    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:51.322516    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:51.418561    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:51.823490    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:51.918558    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:52.323224    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:52.424801    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:52.826332    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:52.931144    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:53.324504    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:53.419976    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:53.824579    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:53.919945    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:54.320303    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:54.421319    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:54.821420    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:54.920467    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:55.321259    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:55.423493    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:55.824108    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:55.922844    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:56.355713    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:56.418626    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:56.820749    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:56.918738    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:57.321471    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:57.421651    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:57.859126    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:57.920655    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:58.358775    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:58.460328    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:58.822444    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:58.921085    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:59.358955    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:59.463292    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:36:59.824465    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:59.920444    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:00.321815    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:00.422605    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:00.823731    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:00.919742    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:01.322551    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:01.419582    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:01.822566    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:01.919406    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:02.322075    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:02.418603    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:02.855278    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:02.956446    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:03.325329    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:03.457440    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:03.823012    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:04.000870    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:04.324135    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:04.423179    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:04.825333    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:04.919015    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:05.323325    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:05.419096    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:05.823604    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:05.924523    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:06.322901    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:06.419904    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:06.824514    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:06.919523    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:07.324260    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:07.418604    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:07.855328    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:07.918624    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:08.323362    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:08.454905    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:08.823335    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:09.110484    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:09.324231    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:09.477105    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:09.858079    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:09.967362    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:10.324414    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:10.418159    1828 kapi.go:107] duration metric: took 3m13.5075017s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0915 06:37:10.822741    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:11.323411    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:11.859641    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:12.322963    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:12.855601    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:13.321842    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:13.825467    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:14.322240    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:14.821131    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:15.322818    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:15.824918    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:16.322152    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:16.823489    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:17.325356    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:17.823938    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:18.323894    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:18.822408    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:19.326153    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:19.821983    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:20.321587    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:20.821533    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:21.322258    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:21.822073    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:22.321774    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:22.822374    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:23.322481    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:23.822711    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:24.323560    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:24.822483    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:25.323455    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:25.822279    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:26.322968    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:26.822086    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:27.323348    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:27.824347    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:28.322077    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:28.822713    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:29.321876    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:29.821452    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:30.322979    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:30.822391    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:31.322918    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:31.825061    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:32.321857    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:32.824340    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:33.322530    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:33.823776    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:34.322380    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:34.821824    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:35.322247    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:35.850500    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:36.323842    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:36.827491    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:37.323888    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:37.822623    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:38.324368    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:38.823685    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:39.324645    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:39.822782    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:40.322749    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:40.822447    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:41.323576    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:41.823247    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:42.322152    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:42.822321    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:43.323410    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:43.822340    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:44.323306    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:44.823391    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:45.324476    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:45.823519    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:46.322463    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:46.825469    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:47.326769    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:47.823246    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:48.323073    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:48.824026    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:49.327354    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:49.825018    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:50.322508    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:50.823209    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:51.322024    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:51.825142    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:52.322175    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:52.822612    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:53.323873    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:53.824326    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:54.322637    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:54.822291    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:55.322100    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:55.823893    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:56.323944    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:56.825358    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:57.323067    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:57.825538    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:58.323752    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:58.824175    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:59.329004    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:59.823026    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:00.323803    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:00.825659    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:01.323354    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:01.829060    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:02.322790    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:02.823897    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:03.324468    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:03.822728    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:04.323382    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:04.821748    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:05.322344    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:05.855138    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:06.356407    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:06.822935    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:07.323089    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:07.824507    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:08.351127    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:08.858747    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:09.352045    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:09.852068    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:10.348893    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:10.823211    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:11.324041    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:11.825060    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:12.349933    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:12.849940    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:13.348959    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:13.848201    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:14.354141    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:14.855466    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:15.325505    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:15.823397    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:16.321780    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:16.849962    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:17.323545    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:17.863730    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:18.348852    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:18.863534    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:19.323319    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:19.822502    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:20.347034    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:20.823167    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:21.322759    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:21.823499    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:22.348017    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:22.848259    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:23.348421    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:23.825826    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:24.350054    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:24.822088    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:25.323382    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:25.824260    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:26.344801    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:26.906106    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:27.323856    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:27.825716    1828 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:28.323743    1828 kapi.go:107] duration metric: took 4m33.5116279s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0915 06:39:29.914726    1828 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0915 06:39:29.914726    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:30.405217    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:30.908523    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:31.410234    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:31.904310    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:32.405188    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:32.906893    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:33.406567    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:33.904577    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:34.405774    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:34.911892    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:35.408344    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:35.904764    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:36.407108    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:36.909428    1828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0915 06:39:37.405340    1828 kapi.go:107] duration metric: took 5m35.5082174s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0915 06:39:37.407812    1828 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-291300 cluster.
	I0915 06:39:37.411592    1828 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0915 06:39:37.414385    1828 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0915 06:39:37.417102    1828 out.go:177] * Enabled addons: cloud-spanner, volcano, nvidia-device-plugin, ingress-dns, storage-provisioner, helm-tiller, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0915 06:39:37.421534    1828 addons.go:510] duration metric: took 6m13.9118167s for enable addons: enabled=[cloud-spanner volcano nvidia-device-plugin ingress-dns storage-provisioner helm-tiller metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0915 06:39:37.421565    1828 start.go:246] waiting for cluster config update ...
	I0915 06:39:37.421633    1828 start.go:255] writing updated cluster config ...
	I0915 06:39:37.437067    1828 ssh_runner.go:195] Run: rm -f paused
	I0915 06:39:37.700874    1828 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0915 06:39:37.705527    1828 out.go:177] * Done! kubectl is now configured to use "addons-291300" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 15 06:49:36 addons-291300 dockerd[1368]: time="2024-09-15T06:49:36.174734701Z" level=info msg="ignoring event" container=a1666f1b9d1aab1473c2afe5c067e787908749c6709985b056b63255a7149342 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:49:36 addons-291300 dockerd[1368]: time="2024-09-15T06:49:36.372735573Z" level=info msg="ignoring event" container=f6bcec3f8dc328f978bad3e6301b5c5c70d2d6ee235982e10b190e32c5191e72 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:49:36 addons-291300 dockerd[1368]: time="2024-09-15T06:49:36.462707911Z" level=info msg="ignoring event" container=30381bb3da10904d5a4bef325b480a8b888eb74929f26c421e8089ea6e2c67ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:49:36 addons-291300 dockerd[1368]: time="2024-09-15T06:49:36.664489563Z" level=info msg="ignoring event" container=5410e1617da03219a1e765be7af2f0b20d5edcfbabf3f4388596cd4127c04e31 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:49:36 addons-291300 dockerd[1368]: time="2024-09-15T06:49:36.667747578Z" level=info msg="ignoring event" container=7ece36058e505e5bd2d370e46db7405b132371ed911810c163755e15172a1806 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:49:36 addons-291300 dockerd[1368]: time="2024-09-15T06:49:36.667794684Z" level=info msg="ignoring event" container=69188ca5151bb91d9ed20998529b1cf05e79668521cc977a2003c5d30f3cb598 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:49:36 addons-291300 dockerd[1368]: time="2024-09-15T06:49:36.762415013Z" level=info msg="ignoring event" container=4b620796e6d97e25854a64caf58ab13df79a7d3714103c58c78df86a0b605fe7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:49:37 addons-291300 dockerd[1368]: time="2024-09-15T06:49:37.478899599Z" level=info msg="ignoring event" container=f12c4de9b00d325e07aeb1b90b1880ec73be5aaabe2cc7352561cfca42c8fef7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:49:38 addons-291300 dockerd[1368]: time="2024-09-15T06:49:38.164815400Z" level=info msg="ignoring event" container=da2e8b752acaff27ab079341a17c7457c9c9c645946796779bb14780cae354d7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:49:38 addons-291300 dockerd[1368]: time="2024-09-15T06:49:38.877877751Z" level=info msg="ignoring event" container=9798e058b6ca998ea04a9889958a275eb55f0db5f2a28f3fb0dc760c156e5053 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:49:38 addons-291300 cri-dockerd[1640]: time="2024-09-15T06:49:38Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"csi-hostpath-resizer-0_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 15 06:49:38 addons-291300 dockerd[1368]: time="2024-09-15T06:49:38.979016309Z" level=info msg="ignoring event" container=e633efad1c5a7796e50abf164181586d1230543db9b1269cf8c56bafd2b0383a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:49:44 addons-291300 dockerd[1368]: time="2024-09-15T06:49:44.761039399Z" level=info msg="ignoring event" container=51571756289fdc73ddb146b5bc7f3c217cdc006ec82df49d1e98fe641369f448 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:49:44 addons-291300 cri-dockerd[1640]: time="2024-09-15T06:49:44Z" level=error msg="error getting RW layer size for container ID '5410e1617da03219a1e765be7af2f0b20d5edcfbabf3f4388596cd4127c04e31': Error response from daemon: No such container: 5410e1617da03219a1e765be7af2f0b20d5edcfbabf3f4388596cd4127c04e31"
	Sep 15 06:49:44 addons-291300 cri-dockerd[1640]: time="2024-09-15T06:49:44Z" level=error msg="Set backoffDuration to : 1m0s for container ID '5410e1617da03219a1e765be7af2f0b20d5edcfbabf3f4388596cd4127c04e31'"
	Sep 15 06:49:44 addons-291300 cri-dockerd[1640]: time="2024-09-15T06:49:44Z" level=error msg="error getting RW layer size for container ID 'a1666f1b9d1aab1473c2afe5c067e787908749c6709985b056b63255a7149342': Error response from daemon: No such container: a1666f1b9d1aab1473c2afe5c067e787908749c6709985b056b63255a7149342"
	Sep 15 06:49:44 addons-291300 cri-dockerd[1640]: time="2024-09-15T06:49:44Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'a1666f1b9d1aab1473c2afe5c067e787908749c6709985b056b63255a7149342'"
	Sep 15 06:49:44 addons-291300 dockerd[1368]: time="2024-09-15T06:49:44.872323847Z" level=info msg="ignoring event" container=4a84932a1df2e49769e69e6cee205f5ffaa1773e519126a1fd368fbdd9950a42 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:49:45 addons-291300 cri-dockerd[1640]: time="2024-09-15T06:49:45Z" level=info msg="Pulling image ghcr.io/headlamp-k8s/headlamp:v0.25.0@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971: deeee09a8f9d: Extracting [=====>                                             ]  3.932MB/37.57MB"
	Sep 15 06:49:46 addons-291300 cri-dockerd[1640]: time="2024-09-15T06:49:46Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"snapshot-controller-56fcc65765-92hfw_kube-system\": unexpected command output nsenter: cannot open /proc/4973/ns/net: No such file or directory\n with error: exit status 1"
	Sep 15 06:49:46 addons-291300 dockerd[1368]: time="2024-09-15T06:49:46.475769102Z" level=info msg="ignoring event" container=16ba6f92e1a5fc73d5b379d64326e10bcbf1f0fcb0019dbc0b5ae6a02df9bd3d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:49:46 addons-291300 cri-dockerd[1640]: time="2024-09-15T06:49:46Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"snapshot-controller-56fcc65765-qvtb6_kube-system\": unexpected command output nsenter: cannot open /proc/5111/ns/net: No such file or directory\n with error: exit status 1"
	Sep 15 06:49:46 addons-291300 dockerd[1368]: time="2024-09-15T06:49:46.675260564Z" level=info msg="ignoring event" container=193a18ede11b673e4c03222615426787086992ebb3ee7e44d4f431fd6b34fa86 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 15 06:49:48 addons-291300 cri-dockerd[1640]: time="2024-09-15T06:49:48Z" level=info msg="Stop pulling image ghcr.io/headlamp-k8s/headlamp:v0.25.0@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971: Status: Downloaded newer image for ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971"
	Sep 15 06:49:52 addons-291300 dockerd[1368]: time="2024-09-15T06:49:52.570255207Z" level=info msg="ignoring event" container=5f3398dfc725e21abc37141024f2be8842af8a010a34325cae61686eceb9332a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4edd3ad8f291a       ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971                        6 seconds ago       Running             headlamp                  0                   e3540fae22952       headlamp-57fb76fcdb-g6chv
	dcc90ceac8090       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                47 seconds ago      Running             nginx                     0                   3e62fa3371e8c       nginx
	34d44bc5a5a72       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 10 minutes ago      Running             gcp-auth                  0                   7cb57df07922b       gcp-auth-89d5ffd79-9qhnw
	8adaabbff8440       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago      Running             controller                0                   e70f649ca63f2       ingress-nginx-controller-bc57996ff-xsprh
	6296b3e7582a6       ce263a8653f9c                                                                                                                13 minutes ago      Exited              patch                     1                   57a545fcd530b       ingress-nginx-admission-patch-5wvp8
	76b251b2ff835       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   13 minutes ago      Exited              create                    0                   70f4f34d603c3       ingress-nginx-admission-create-555tl
	1c490bafd573a       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       13 minutes ago      Running             local-path-provisioner    0                   5fc8b2ecb9d12       local-path-provisioner-86d989889c-7qvr8
	c76f018ac22ef       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              14 minutes ago      Running             registry-proxy            0                   8b5efbb2880d5       registry-proxy-rdwl4
	cc8c4a1f1e186       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                             14 minutes ago      Running             registry                  0                   cd310ee769cac       registry-66c9cd494c-wm6fs
	370ec39223a6b       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             14 minutes ago      Running             minikube-ingress-dns      0                   aea2ac1de38ce       kube-ingress-dns-minikube
	3054a493b983f       6e38f40d628db                                                                                                                16 minutes ago      Running             storage-provisioner       0                   db107492e4067       storage-provisioner
	50b96eeb3dbdb       c69fa2e9cbf5f                                                                                                                16 minutes ago      Running             coredns                   0                   bad2ed535cb59       coredns-7c65d6cfc9-b4jc9
	1cd5f0902f29f       60c005f310ff3                                                                                                                16 minutes ago      Running             kube-proxy                0                   2a5dd1abcc2a2       kube-proxy-q7ggz
	e39d00629d747       175ffd71cce3d                                                                                                                16 minutes ago      Running             kube-controller-manager   0                   4a08b297b830e       kube-controller-manager-addons-291300
	51e7d21fa8296       6bab7719df100                                                                                                                16 minutes ago      Running             kube-apiserver            0                   590513f51779c       kube-apiserver-addons-291300
	c2c691d531f29       9aa1fad941575                                                                                                                16 minutes ago      Running             kube-scheduler            0                   bdd717bbb5e8e       kube-scheduler-addons-291300
	a5b41ce46b6f2       2e96e5913fc06                                                                                                                16 minutes ago      Running             etcd                      0                   dbcf2d6388f79       etcd-addons-291300
	
	
	==> controller_ingress [8adaabbff844] <==
	I0915 06:38:28.742224       8 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0915 06:38:28.742794       8 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0915 06:38:28.742954       8 controller.go:193] "Configuration changes detected, backend reload required"
	I0915 06:38:28.765406       8 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0915 06:38:28.765517       8 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-xsprh"
	I0915 06:38:28.772569       8 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-xsprh" node="addons-291300"
	I0915 06:38:28.796306       8 controller.go:213] "Backend successfully reloaded"
	I0915 06:38:28.796506       8 controller.go:224] "Initial sync, sleeping for 1 second"
	I0915 06:38:28.796705       8 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-xsprh", UID:"87a7b783-5586-42cc-b381-a3d0807252b9", APIVersion:"v1", ResourceVersion:"807", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0915 06:48:57.672618       8 controller.go:1110] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0915 06:48:57.700199       8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.028s renderingIngressLength:1 renderingIngressTime:0s admissionTime:0.028s testedConfigurationSize:18.1kB}
	I0915 06:48:57.700330       8 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0915 06:48:57.708986       8 store.go:440] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	I0915 06:48:57.709940       8 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"aa830745-a924-40db-bbea-11b5304fbeca", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2840", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0915 06:48:57.711915       8 controller.go:1110] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0915 06:48:57.712243       8 controller.go:193] "Configuration changes detected, backend reload required"
	I0915 06:48:57.815860       8 controller.go:213] "Backend successfully reloaded"
	I0915 06:48:57.816502       8 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-xsprh", UID:"87a7b783-5586-42cc-b381-a3d0807252b9", APIVersion:"v1", ResourceVersion:"807", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0915 06:49:01.067049       8 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
	I0915 06:49:01.067510       8 controller.go:193] "Configuration changes detected, backend reload required"
	I0915 06:49:01.364903       8 controller.go:213] "Backend successfully reloaded"
	I0915 06:49:01.365385       8 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-xsprh", UID:"87a7b783-5586-42cc-b381-a3d0807252b9", APIVersion:"v1", ResourceVersion:"807", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0915 06:49:28.703140       8 status.go:304] "updating Ingress status" namespace="default" ingress="nginx-ingress" currentValue=null newValue=[{"ip":"192.168.49.2"}]
	I0915 06:49:28.712087       8 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"aa830745-a924-40db-bbea-11b5304fbeca", APIVersion:"networking.k8s.io/v1", ResourceVersion:"3043", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	10.244.0.1 - - [15/Sep/2024:06:49:15 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.81.0" 81 0.003 [default-nginx-80] [] 10.244.0.33:80 615 0.004 200 ec6d747546a7c3c68a20556d63e87cca
	
	
	==> coredns [50b96eeb3dbd] <==
	[INFO] 127.0.0.1:38783 - 31378 "HINFO IN 4443784167856979337.3128483538043342397. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.073366063s
	[INFO] 10.244.0.9:39288 - 49107 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000545667s
	[INFO] 10.244.0.9:39288 - 34518 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000824801s
	[INFO] 10.244.0.9:54827 - 55369 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000391448s
	[INFO] 10.244.0.9:54827 - 58708 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000466557s
	[INFO] 10.244.0.9:36924 - 9884 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000181822s
	[INFO] 10.244.0.9:36924 - 30874 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000171621s
	[INFO] 10.244.0.9:44129 - 13741 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000200925s
	[INFO] 10.244.0.9:44129 - 37040 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000267033s
	[INFO] 10.244.0.9:47016 - 57269 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00032734s
	[INFO] 10.244.0.9:47016 - 47282 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000587072s
	[INFO] 10.244.0.9:38210 - 58439 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00024813s
	[INFO] 10.244.0.9:38210 - 34680 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000595073s
	[INFO] 10.244.0.9:59512 - 38780 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000192124s
	[INFO] 10.244.0.9:59512 - 8048 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000417652s
	[INFO] 10.244.0.9:42574 - 4238 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000155319s
	[INFO] 10.244.0.9:42574 - 31372 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000218427s
	[INFO] 10.244.0.26:59049 - 15980 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000520471s
	[INFO] 10.244.0.26:55587 - 44288 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00021533s
	[INFO] 10.244.0.26:38076 - 42242 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000194227s
	[INFO] 10.244.0.26:42753 - 51665 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000423658s
	[INFO] 10.244.0.26:46582 - 372 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000164422s
	[INFO] 10.244.0.26:47717 - 64408 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000341547s
	[INFO] 10.244.0.26:47722 - 47277 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.010885203s
	[INFO] 10.244.0.26:42930 - 5546 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 192 0.011340265s
	
	
	==> describe nodes <==
	Name:               addons-291300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-291300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=addons-291300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_15T06_33_19_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-291300
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 06:33:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-291300
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 06:49:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 06:49:50 +0000   Sun, 15 Sep 2024 06:33:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 06:49:50 +0000   Sun, 15 Sep 2024 06:33:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 06:49:50 +0000   Sun, 15 Sep 2024 06:33:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 06:49:50 +0000   Sun, 15 Sep 2024 06:33:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-291300
	Capacity:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868688Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868688Ki
	  pods:               110
	System Info:
	  Machine ID:                 439c4056b904457cb4212f25eac136dc
	  System UUID:                439c4056b904457cb4212f25eac136dc
	  Boot ID:                    c1102496-7d49-4e83-b615-37466f69e894
	  Kernel Version:             5.15.153.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m22s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  gcp-auth                    gcp-auth-89d5ffd79-9qhnw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  headlamp                    headlamp-57fb76fcdb-g6chv                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-xsprh    100m (0%)     0 (0%)      90Mi (0%)        0 (0%)         16m
	  kube-system                 coredns-7c65d6cfc9-b4jc9                    100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     16m
	  kube-system                 etcd-addons-291300                          100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         16m
	  kube-system                 kube-apiserver-addons-291300                250m (1%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-addons-291300       200m (1%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-q7ggz                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-addons-291300                100m (0%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 registry-66c9cd494c-wm6fs                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 registry-proxy-rdwl4                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  local-path-storage          local-path-provisioner-86d989889c-7qvr8     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (5%)   0 (0%)
	  memory             260Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                             Age                From             Message
	  ----     ------                             ----               ----             -------
	  Normal   Starting                           16m                kube-proxy       
	  Warning  PossibleMemoryBackedVolumesOnDisk  16m                kubelet          The tmpfs noswap option is not supported. Memory-backed volumes (e.g. secrets, emptyDirs, etc.) might be swapped to disk and should no longer be considered secure.
	  Normal   Starting                           16m                kubelet          Starting kubelet.
	  Warning  CgroupV1                           16m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory            16m (x7 over 16m)  kubelet          Node addons-291300 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure              16m (x7 over 16m)  kubelet          Node addons-291300 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID               16m (x7 over 16m)  kubelet          Node addons-291300 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced            16m                kubelet          Updated Node Allocatable limit across pods
	  Warning  PossibleMemoryBackedVolumesOnDisk  16m                kubelet          The tmpfs noswap option is not supported. Memory-backed volumes (e.g. secrets, emptyDirs, etc.) might be swapped to disk and should no longer be considered secure.
	  Normal   Starting                           16m                kubelet          Starting kubelet.
	  Warning  CgroupV1                           16m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced            16m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory            16m                kubelet          Node addons-291300 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure              16m                kubelet          Node addons-291300 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID               16m                kubelet          Node addons-291300 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode                     16m                node-controller  Node addons-291300 event: Registered Node addons-291300 in Controller
	
	
	==> dmesg <==
	[  +0.001666] FS-Cache: N-cookie d=000000000e7bf600{9P.session} n=0000000017578875
	[  +0.002114] FS-Cache: N-key=[10] '34323934393337343737'
	[  +0.823202] WSL (1) ERROR: ConfigApplyWindowsLibPath:2537: open /etc/ld.so.conf.d/ld.wsl.conf
	[  +0.000005]  failed 2
	[  +0.024004] WSL (1) WARNING: /usr/share/zoneinfo/Etc/UTC not found. Is the tzdata package installed?
	[  +0.522980] misc dxg: dxgk: dxgglobal_acquire_channel_lock: Failed to acquire global channel lock
	[  +1.080299] WSL (2) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.002716] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.002659] WSL (1) ERROR: ConfigMountFsTab:2589: Processing fstab with mount -a failed.
	[  +0.004466] WSL (1) ERROR: ConfigApplyWindowsLibPath:2537: open /etc/ld.so.conf.d/ld.wsl.conf
	[  +0.000004]  failed 2
	[  +0.017869] WSL (3) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.001888] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.003623] WSL (4) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.002394] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.080494] WSL (1) WARNING: /usr/share/zoneinfo/Etc/UTC not found. Is the tzdata package installed?
	[  +0.099908] misc dxg: dxgk: dxgglobal_acquire_channel_lock: Failed to acquire global channel lock
	[  +0.921474] netlink: 'init': attribute type 4 has an invalid length.
	[Sep15 05:15] tmpfs: Unknown parameter 'noswap'
	[  +9.562128] tmpfs: Unknown parameter 'noswap'
	[Sep15 06:33] tmpfs: Unknown parameter 'noswap'
	[  +9.861380] tmpfs: Unknown parameter 'noswap'
	
	
	==> etcd [a5b41ce46b6f] <==
	{"level":"info","ts":"2024-09-15T06:49:36.763621Z","caller":"traceutil/trace.go:171","msg":"trace[1960702832] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:3185; }","duration":"100.963936ms","start":"2024-09-15T06:49:36.662575Z","end":"2024-09-15T06:49:36.763539Z","steps":["trace[1960702832] 'agreement among raft nodes before linearized reading'  (duration: 100.890226ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:49:36.763514Z","caller":"traceutil/trace.go:171","msg":"trace[1832261550] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/csi-attacher; range_end:; response_count:0; response_revision:3185; }","duration":"191.84679ms","start":"2024-09-15T06:49:36.571654Z","end":"2024-09-15T06:49:36.763501Z","steps":["trace[1832261550] 'agreement among raft nodes before linearized reading'  (duration: 191.77128ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:49:36.975310Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.694554ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/kube-system/external-attacher-cfg\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T06:49:36.975387Z","caller":"traceutil/trace.go:171","msg":"trace[1381914904] range","detail":"{range_begin:/registry/roles/kube-system/external-attacher-cfg; range_end:; response_count:0; response_revision:3185; }","duration":"113.805668ms","start":"2024-09-15T06:49:36.861564Z","end":"2024-09-15T06:49:36.975370Z","steps":["trace[1381914904] 'range keys from in-memory index tree'  (duration: 113.641547ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:49:36.975472Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.998493ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1114"}
	{"level":"info","ts":"2024-09-15T06:49:36.975518Z","caller":"traceutil/trace.go:171","msg":"trace[756521589] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:3185; }","duration":"114.047399ms","start":"2024-09-15T06:49:36.861457Z","end":"2024-09-15T06:49:36.975504Z","steps":["trace[756521589] 'range keys from in-memory index tree'  (duration: 113.81717ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:49:36.975863Z","caller":"traceutil/trace.go:171","msg":"trace[748619726] transaction","detail":"{read_only:false; response_revision:3186; number_of_response:1; }","duration":"113.97629ms","start":"2024-09-15T06:49:36.861867Z","end":"2024-09-15T06:49:36.975843Z","steps":["trace[748619726] 'process raft request'  (duration: 27.747728ms)","trace[748619726] 'compare'  (duration: 85.546675ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-15T06:49:37.193934Z","caller":"traceutil/trace.go:171","msg":"trace[662901344] linearizableReadLoop","detail":"{readStateIndex:3423; appliedIndex:3422; }","duration":"126.854427ms","start":"2024-09-15T06:49:37.067064Z","end":"2024-09-15T06:49:37.193918Z","steps":["trace[662901344] 'read index received'  (duration: 8.478778ms)","trace[662901344] 'applied index is now lower than readState.Index'  (duration: 118.374949ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-15T06:49:37.194181Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.100859ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/csi-hostpathplugin-sa\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T06:49:37.194213Z","caller":"traceutil/trace.go:171","msg":"trace[2097012839] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/csi-hostpathplugin-sa; range_end:; response_count:0; response_revision:3189; }","duration":"127.142964ms","start":"2024-09-15T06:49:37.067060Z","end":"2024-09-15T06:49:37.194203Z","steps":["trace[2097012839] 'agreement among raft nodes before linearized reading'  (duration: 127.053353ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:49:37.194646Z","caller":"traceutil/trace.go:171","msg":"trace[1730964624] transaction","detail":"{read_only:false; response_revision:3188; number_of_response:1; }","duration":"128.986598ms","start":"2024-09-15T06:49:37.065644Z","end":"2024-09-15T06:49:37.194631Z","steps":["trace[1730964624] 'process raft request'  (duration: 42.639721ms)","trace[1730964624] 'compare'  (duration: 85.510071ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-15T06:49:37.194925Z","caller":"traceutil/trace.go:171","msg":"trace[1775588216] transaction","detail":"{read_only:false; response_revision:3189; number_of_response:1; }","duration":"124.526331ms","start":"2024-09-15T06:49:37.070385Z","end":"2024-09-15T06:49:37.194911Z","steps":["trace[1775588216] 'process raft request'  (duration: 123.4939ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:49:37.661792Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.125629ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/external-health-monitor-controller-runner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T06:49:37.662013Z","caller":"traceutil/trace.go:171","msg":"trace[1198937717] range","detail":"{range_begin:/registry/clusterroles/external-health-monitor-controller-runner; range_end:; response_count:0; response_revision:3191; }","duration":"100.392263ms","start":"2024-09-15T06:49:37.561600Z","end":"2024-09-15T06:49:37.661992Z","steps":["trace[1198937717] 'range keys from in-memory index tree'  (duration: 100.04922ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:49:45.516896Z","caller":"traceutil/trace.go:171","msg":"trace[857822619] linearizableReadLoop","detail":"{readStateIndex:3470; appliedIndex:3469; }","duration":"252.288675ms","start":"2024-09-15T06:49:45.264564Z","end":"2024-09-15T06:49:45.516853Z","steps":["trace[857822619] 'read index received'  (duration: 251.62419ms)","trace[857822619] 'applied index is now lower than readState.Index'  (duration: 663.584µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-15T06:49:45.517239Z","caller":"traceutil/trace.go:171","msg":"trace[1525633551] transaction","detail":"{read_only:false; response_revision:3232; number_of_response:1; }","duration":"331.163703ms","start":"2024-09-15T06:49:45.186045Z","end":"2024-09-15T06:49:45.517209Z","steps":["trace[1525633551] 'process raft request'  (duration: 330.291792ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:49:45.517541Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-15T06:49:45.186021Z","time spent":"331.262915ms","remote":"127.0.0.1:49872","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:3192 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:425 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >"}
	{"level":"warn","ts":"2024-09-15T06:49:45.517591Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"253.018268ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1114"}
	{"level":"info","ts":"2024-09-15T06:49:45.517638Z","caller":"traceutil/trace.go:171","msg":"trace[532708898] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:3232; }","duration":"253.080975ms","start":"2024-09-15T06:49:45.264541Z","end":"2024-09-15T06:49:45.517622Z","steps":["trace[532708898] 'agreement among raft nodes before linearized reading'  (duration: 252.872149ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:49:45.794148Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.154744ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-09-15T06:49:45.794279Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.06419ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T06:49:45.794308Z","caller":"traceutil/trace.go:171","msg":"trace[990906298] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:3233; }","duration":"132.099794ms","start":"2024-09-15T06:49:45.662201Z","end":"2024-09-15T06:49:45.794301Z","steps":["trace[990906298] 'agreement among raft nodes before linearized reading'  (duration: 132.018284ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:49:45.794314Z","caller":"traceutil/trace.go:171","msg":"trace[2096336836] transaction","detail":"{read_only:false; response_revision:3233; number_of_response:1; }","duration":"228.508952ms","start":"2024-09-15T06:49:45.565797Z","end":"2024-09-15T06:49:45.794306Z","steps":["trace[2096336836] 'process raft request'  (duration: 192.833416ms)","trace[2096336836] 'compare'  (duration: 35.193674ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-15T06:49:45.794281Z","caller":"traceutil/trace.go:171","msg":"trace[1290925234] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:3232; }","duration":"152.474485ms","start":"2024-09-15T06:49:45.641793Z","end":"2024-09-15T06:49:45.794268Z","steps":["trace[1290925234] 'range keys from in-memory index tree'  (duration: 152.140943ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:49:45.794194Z","caller":"traceutil/trace.go:171","msg":"trace[321192068] linearizableReadLoop","detail":"{readStateIndex:3471; appliedIndex:3470; }","duration":"131.975279ms","start":"2024-09-15T06:49:45.662205Z","end":"2024-09-15T06:49:45.794180Z","steps":["trace[321192068] 'read index received'  (duration: 96.520071ms)","trace[321192068] 'applied index is now lower than readState.Index'  (duration: 35.454608ms)"],"step_count":2}
	
	
	==> gcp-auth [34d44bc5a5a7] <==
	2024/09/15 06:40:33 Ready to write response ...
	2024/09/15 06:40:34 Ready to marshal response ...
	2024/09/15 06:40:34 Ready to write response ...
	2024/09/15 06:48:42 Ready to marshal response ...
	2024/09/15 06:48:42 Ready to write response ...
	2024/09/15 06:48:42 Ready to marshal response ...
	2024/09/15 06:48:42 Ready to write response ...
	2024/09/15 06:48:52 Ready to marshal response ...
	2024/09/15 06:48:52 Ready to write response ...
	2024/09/15 06:48:56 Ready to marshal response ...
	2024/09/15 06:48:56 Ready to write response ...
	2024/09/15 06:48:58 Ready to marshal response ...
	2024/09/15 06:48:58 Ready to write response ...
	2024/09/15 06:49:00 Ready to marshal response ...
	2024/09/15 06:49:00 Ready to write response ...
	2024/09/15 06:49:20 Ready to marshal response ...
	2024/09/15 06:49:20 Ready to write response ...
	2024/09/15 06:49:21 Ready to marshal response ...
	2024/09/15 06:49:21 Ready to write response ...
	2024/09/15 06:49:33 Ready to marshal response ...
	2024/09/15 06:49:33 Ready to write response ...
	2024/09/15 06:49:33 Ready to marshal response ...
	2024/09/15 06:49:33 Ready to write response ...
	2024/09/15 06:49:33 Ready to marshal response ...
	2024/09/15 06:49:33 Ready to write response ...
	
	
	==> kernel <==
	 06:49:55 up  1:52,  0 users,  load average: 2.84, 1.32, 1.14
	Linux addons-291300 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [51e7d21fa829] <==
	I0915 06:40:26.027262       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0915 06:40:26.226172       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0915 06:40:27.037248       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0915 06:40:27.235700       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0915 06:48:55.633400       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0915 06:48:56.689994       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0915 06:48:57.701602       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0915 06:48:58.791783       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.18.184"}
	I0915 06:49:15.001633       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0915 06:49:28.075152       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.49.2:8443->10.244.0.36:48810: read: connection reset by peer
	I0915 06:49:28.927289       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0915 06:49:33.326445       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.95.26"}
	I0915 06:49:43.926259       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:49:43.926393       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:49:43.965039       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:49:43.965148       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:49:44.010565       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:49:44.010740       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:49:44.087216       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:49:44.087483       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0915 06:49:44.266158       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0915 06:49:44.266293       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0915 06:49:45.087972       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0915 06:49:45.267562       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0915 06:49:45.286969       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [e39d00629d74] <==
	E0915 06:49:46.214565       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:49:46.763276       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:49:46.763347       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:49:46.794666       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:49:46.794822       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:49:48.537161       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:49:48.537283       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0915 06:49:48.828256       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="73.51µs"
	I0915 06:49:48.880296       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="29.634041ms"
	I0915 06:49:48.880552       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="40.205µs"
	W0915 06:49:49.114337       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:49:49.114446       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:49:49.264500       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:49:49.264611       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0915 06:49:50.236867       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-291300"
	W0915 06:49:53.773083       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:49:53.773227       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0915 06:49:54.090931       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:49:54.091058       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0915 06:49:54.208825       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0915 06:49:54.208970       1 shared_informer.go:320] Caches are synced for resource quota
	I0915 06:49:54.817637       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0915 06:49:54.817754       1 shared_informer.go:320] Caches are synced for garbage collector
	W0915 06:49:55.085738       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0915 06:49:55.085886       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [1cd5f0902f29] <==
	E0915 06:33:37.871953       1 metrics.go:340] "failed to initialize nfacct client" err="nfacct sub-system not available"
	E0915 06:33:37.971064       1 metrics.go:340] "failed to initialize nfacct client" err="nfacct sub-system not available"
	I0915 06:33:38.086843       1 server_linux.go:66] "Using iptables proxy"
	I0915 06:33:38.878002       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0915 06:33:38.878267       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 06:33:39.576743       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0915 06:33:39.577025       1 server_linux.go:169] "Using iptables Proxier"
	I0915 06:33:39.671743       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	E0915 06:33:39.696823       1 proxier.go:283] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv4"
	E0915 06:33:39.771525       1 proxier.go:283] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv6"
	I0915 06:33:39.771807       1 server.go:483] "Version info" version="v1.31.1"
	I0915 06:33:39.771852       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 06:33:39.778583       1 config.go:105] "Starting endpoint slice config controller"
	I0915 06:33:39.778689       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 06:33:39.778874       1 config.go:199] "Starting service config controller"
	I0915 06:33:39.778888       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 06:33:39.778934       1 config.go:328] "Starting node config controller"
	I0915 06:33:39.778942       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0915 06:33:39.980095       1 shared_informer.go:320] Caches are synced for node config
	I0915 06:33:39.980412       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0915 06:33:39.980461       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [c2c691d531f2] <==
	W0915 06:33:15.810434       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0915 06:33:15.810538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:33:15.828705       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 06:33:15.828805       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 06:33:15.848899       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0915 06:33:15.848999       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:33:15.896624       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0915 06:33:15.896778       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0915 06:33:15.909609       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0915 06:33:15.909715       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:33:15.927357       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0915 06:33:15.927547       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:33:16.074421       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 06:33:16.074573       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 06:33:16.095259       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0915 06:33:16.095400       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:33:16.213312       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0915 06:33:16.213629       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:33:16.355639       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 06:33:16.355745       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0915 06:33:16.359204       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 06:33:16.359264       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:33:16.461900       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0915 06:33:16.462003       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0915 06:33:18.591909       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 15 06:49:41 addons-291300 kubelet[2557]: I0915 06:49:41.303665    2557 scope.go:117] "RemoveContainer" containerID="4b620796e6d97e25854a64caf58ab13df79a7d3714103c58c78df86a0b605fe7"
	Sep 15 06:49:41 addons-291300 kubelet[2557]: I0915 06:49:41.388563    2557 scope.go:117] "RemoveContainer" containerID="5410e1617da03219a1e765be7af2f0b20d5edcfbabf3f4388596cd4127c04e31"
	Sep 15 06:49:42 addons-291300 kubelet[2557]: E0915 06:49:42.487163    2557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="65e477a8-d41f-46ba-8a01-5de7bde445a8"
	Sep 15 06:49:42 addons-291300 kubelet[2557]: I0915 06:49:42.500907    2557 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8634ad1a-508b-4129-b087-3f4b4bece6dc" path="/var/lib/kubelet/pods/8634ad1a-508b-4129-b087-3f4b4bece6dc/volumes"
	Sep 15 06:49:42 addons-291300 kubelet[2557]: I0915 06:49:42.501597    2557 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d655397-3e2b-467b-9cb0-308c3fb22fcf" path="/var/lib/kubelet/pods/9d655397-3e2b-467b-9cb0-308c3fb22fcf/volumes"
	Sep 15 06:49:42 addons-291300 kubelet[2557]: I0915 06:49:42.503049    2557 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d31e9e25-e90d-4915-9734-86badead8020" path="/var/lib/kubelet/pods/d31e9e25-e90d-4915-9734-86badead8020/volumes"
	Sep 15 06:49:47 addons-291300 kubelet[2557]: I0915 06:49:47.387694    2557 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn4cm\" (UniqueName: \"kubernetes.io/projected/60215267-c80f-470a-8451-d81555a7a6d0-kube-api-access-hn4cm\") pod \"60215267-c80f-470a-8451-d81555a7a6d0\" (UID: \"60215267-c80f-470a-8451-d81555a7a6d0\") "
	Sep 15 06:49:47 addons-291300 kubelet[2557]: I0915 06:49:47.387938    2557 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lql6\" (UniqueName: \"kubernetes.io/projected/de678f7e-df62-42d6-93cf-602a65c22ff9-kube-api-access-4lql6\") pod \"de678f7e-df62-42d6-93cf-602a65c22ff9\" (UID: \"de678f7e-df62-42d6-93cf-602a65c22ff9\") "
	Sep 15 06:49:47 addons-291300 kubelet[2557]: I0915 06:49:47.391401    2557 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60215267-c80f-470a-8451-d81555a7a6d0-kube-api-access-hn4cm" (OuterVolumeSpecName: "kube-api-access-hn4cm") pod "60215267-c80f-470a-8451-d81555a7a6d0" (UID: "60215267-c80f-470a-8451-d81555a7a6d0"). InnerVolumeSpecName "kube-api-access-hn4cm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 06:49:47 addons-291300 kubelet[2557]: I0915 06:49:47.391964    2557 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de678f7e-df62-42d6-93cf-602a65c22ff9-kube-api-access-4lql6" (OuterVolumeSpecName: "kube-api-access-4lql6") pod "de678f7e-df62-42d6-93cf-602a65c22ff9" (UID: "de678f7e-df62-42d6-93cf-602a65c22ff9"). InnerVolumeSpecName "kube-api-access-4lql6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 06:49:47 addons-291300 kubelet[2557]: I0915 06:49:47.488509    2557 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-4lql6\" (UniqueName: \"kubernetes.io/projected/de678f7e-df62-42d6-93cf-602a65c22ff9-kube-api-access-4lql6\") on node \"addons-291300\" DevicePath \"\""
	Sep 15 06:49:47 addons-291300 kubelet[2557]: I0915 06:49:47.488628    2557 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-hn4cm\" (UniqueName: \"kubernetes.io/projected/60215267-c80f-470a-8451-d81555a7a6d0-kube-api-access-hn4cm\") on node \"addons-291300\" DevicePath \"\""
	Sep 15 06:49:47 addons-291300 kubelet[2557]: I0915 06:49:47.704368    2557 scope.go:117] "RemoveContainer" containerID="51571756289fdc73ddb146b5bc7f3c217cdc006ec82df49d1e98fe641369f448"
	Sep 15 06:49:47 addons-291300 kubelet[2557]: I0915 06:49:47.861006    2557 scope.go:117] "RemoveContainer" containerID="4a84932a1df2e49769e69e6cee205f5ffaa1773e519126a1fd368fbdd9950a42"
	Sep 15 06:49:48 addons-291300 kubelet[2557]: I0915 06:49:48.495983    2557 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60215267-c80f-470a-8451-d81555a7a6d0" path="/var/lib/kubelet/pods/60215267-c80f-470a-8451-d81555a7a6d0/volumes"
	Sep 15 06:49:48 addons-291300 kubelet[2557]: I0915 06:49:48.496761    2557 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de678f7e-df62-42d6-93cf-602a65c22ff9" path="/var/lib/kubelet/pods/de678f7e-df62-42d6-93cf-602a65c22ff9/volumes"
	Sep 15 06:49:48 addons-291300 kubelet[2557]: I0915 06:49:48.828778    2557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="headlamp/headlamp-57fb76fcdb-g6chv" podStartSLOduration=2.452040828 podStartE2EDuration="15.82875645s" podCreationTimestamp="2024-09-15 06:49:33 +0000 UTC" firstStartedPulling="2024-09-15 06:49:34.910485377 +0000 UTC m=+976.749404297" lastFinishedPulling="2024-09-15 06:49:48.287200899 +0000 UTC m=+990.126119919" observedRunningTime="2024-09-15 06:49:48.827644909 +0000 UTC m=+990.666563829" watchObservedRunningTime="2024-09-15 06:49:48.82875645 +0000 UTC m=+990.667675470"
	Sep 15 06:49:49 addons-291300 kubelet[2557]: E0915 06:49:49.481899    2557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="c837060f-a779-4144-80e1-952ce956ef89"
	Sep 15 06:49:52 addons-291300 kubelet[2557]: I0915 06:49:52.833921    2557 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwv56\" (UniqueName: \"kubernetes.io/projected/c837060f-a779-4144-80e1-952ce956ef89-kube-api-access-zwv56\") pod \"c837060f-a779-4144-80e1-952ce956ef89\" (UID: \"c837060f-a779-4144-80e1-952ce956ef89\") "
	Sep 15 06:49:52 addons-291300 kubelet[2557]: I0915 06:49:52.834042    2557 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/c837060f-a779-4144-80e1-952ce956ef89-gcp-creds\") pod \"c837060f-a779-4144-80e1-952ce956ef89\" (UID: \"c837060f-a779-4144-80e1-952ce956ef89\") "
	Sep 15 06:49:52 addons-291300 kubelet[2557]: I0915 06:49:52.834323    2557 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c837060f-a779-4144-80e1-952ce956ef89-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "c837060f-a779-4144-80e1-952ce956ef89" (UID: "c837060f-a779-4144-80e1-952ce956ef89"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 15 06:49:52 addons-291300 kubelet[2557]: I0915 06:49:52.839701    2557 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c837060f-a779-4144-80e1-952ce956ef89-kube-api-access-zwv56" (OuterVolumeSpecName: "kube-api-access-zwv56") pod "c837060f-a779-4144-80e1-952ce956ef89" (UID: "c837060f-a779-4144-80e1-952ce956ef89"). InnerVolumeSpecName "kube-api-access-zwv56". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 15 06:49:52 addons-291300 kubelet[2557]: I0915 06:49:52.934736    2557 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zwv56\" (UniqueName: \"kubernetes.io/projected/c837060f-a779-4144-80e1-952ce956ef89-kube-api-access-zwv56\") on node \"addons-291300\" DevicePath \"\""
	Sep 15 06:49:52 addons-291300 kubelet[2557]: I0915 06:49:52.934835    2557 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/c837060f-a779-4144-80e1-952ce956ef89-gcp-creds\") on node \"addons-291300\" DevicePath \"\""
	Sep 15 06:49:54 addons-291300 kubelet[2557]: I0915 06:49:54.492462    2557 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c837060f-a779-4144-80e1-952ce956ef89" path="/var/lib/kubelet/pods/c837060f-a779-4144-80e1-952ce956ef89/volumes"
	
	
	==> storage-provisioner [3054a493b983] <==
	I0915 06:33:48.493019       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0915 06:33:48.590114       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0915 06:33:48.590418       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0915 06:33:49.077047       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0915 06:33:49.078050       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-291300_93fee6b2-27eb-4ed2-a39f-912a2197f128!
	I0915 06:33:49.080977       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2b535ead-3426-46aa-8463-c56fe83064fa", APIVersion:"v1", ResourceVersion:"810", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-291300_93fee6b2-27eb-4ed2-a39f-912a2197f128 became leader
	I0915 06:33:49.279573       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-291300_93fee6b2-27eb-4ed2-a39f-912a2197f128!
	

                                                
                                                
-- /stdout --
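Reading the log dump above: coredns answers the "A IN registry.kube-system.svc.cluster.local" queries with NOERROR, while kubelet reports the default/registry-test pod stuck in ImagePullBackOff for gcr.io/k8s-minikube/busybox. That points at the test pod never starting rather than at a DNS or registry-service fault. A manual check modelled on the test's own kubectl invocation (the pod name dns-check is illustrative and not part of the test) might be:

	kubectl --context addons-291300 run dns-check --rm -it --restart=Never --image=gcr.io/k8s-minikube/busybox -- nslookup registry.kube-system.svc.cluster.local

If the node cannot pull that image, this check fails the same way the test did.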
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-291300 -n addons-291300
helpers_test.go:261: (dbg) Run:  kubectl --context addons-291300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-555tl ingress-nginx-admission-patch-5wvp8
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-291300 describe pod busybox ingress-nginx-admission-create-555tl ingress-nginx-admission-patch-5wvp8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-291300 describe pod busybox ingress-nginx-admission-create-555tl ingress-nginx-admission-patch-5wvp8: exit status 1 (270.6443ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-291300/192.168.49.2
	Start Time:       Sun, 15 Sep 2024 06:40:33 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4ws5w (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-4ws5w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m25s                   default-scheduler  Successfully assigned default/busybox to addons-291300
	  Warning  Failed     8m13s (x6 over 9m23s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    8m1s (x4 over 9m24s)    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     8m1s (x4 over 9m23s)    kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     8m1s (x4 over 9m23s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    4m13s (x22 over 9m23s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-555tl" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-5wvp8" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-291300 describe pod busybox ingress-nginx-admission-create-555tl ingress-nginx-admission-patch-5wvp8: exit status 1
--- FAIL: TestAddons/parallel/Registry (76.69s)

TestErrorSpam/setup (65.83s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-382400 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 --driver=docker
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-382400 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 --driver=docker: (1m5.8286661s)
error_spam_test.go:96: unexpected stderr: "! Failing to connect to https://registry.k8s.io/ from inside the minikube container"
error_spam_test.go:96: unexpected stderr: "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
error_spam_test.go:110: minikube stdout:
* [nospam-382400] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
- KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
- MINIKUBE_LOCATION=19644
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting "nospam-382400" primary control-plane node in "nospam-382400" cluster
* Pulling base image v0.0.45-1726358845-19644 ...
* Creating docker container (CPUs=2, Memory=2250MB) ...
* Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-382400" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Failing to connect to https://registry.k8s.io/ from inside the minikube container
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
--- FAIL: TestErrorSpam/setup (65.83s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (5.18s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:735: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-804700
helpers_test.go:235: (dbg) docker inspect functional-804700:

-- stdout --
	[
	    {
	        "Id": "cb41977f8665f39c8fdb8d6c1015b2fb34578459ceac98b85348951e73eff823",
	        "Created": "2024-09-15T06:52:26.786252249Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 58697,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-15T06:52:27.152862534Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/cb41977f8665f39c8fdb8d6c1015b2fb34578459ceac98b85348951e73eff823/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cb41977f8665f39c8fdb8d6c1015b2fb34578459ceac98b85348951e73eff823/hostname",
	        "HostsPath": "/var/lib/docker/containers/cb41977f8665f39c8fdb8d6c1015b2fb34578459ceac98b85348951e73eff823/hosts",
	        "LogPath": "/var/lib/docker/containers/cb41977f8665f39c8fdb8d6c1015b2fb34578459ceac98b85348951e73eff823/cb41977f8665f39c8fdb8d6c1015b2fb34578459ceac98b85348951e73eff823-json.log",
	        "Name": "/functional-804700",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-804700:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-804700",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bec00367dc8e832f29ad439868f9920762824f7faeb07f1c89fbc617da55db65-init/diff:/var/lib/docker/overlay2/088094ea3ec63a034bad03ae1c40688e7addaaacd3a78b61d75b8c492a19f093/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bec00367dc8e832f29ad439868f9920762824f7faeb07f1c89fbc617da55db65/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bec00367dc8e832f29ad439868f9920762824f7faeb07f1c89fbc617da55db65/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bec00367dc8e832f29ad439868f9920762824f7faeb07f1c89fbc617da55db65/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-804700",
	                "Source": "/var/lib/docker/volumes/functional-804700/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-804700",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-804700",
	                "name.minikube.sigs.k8s.io": "functional-804700",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2457308114e9a71b5c4a388b3abe0109934d4ce1083242799d321468c7c5c4b9",
	            "SandboxKey": "/var/run/docker/netns/2457308114e9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49866"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49867"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49868"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49869"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49870"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-804700": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "54ea075526faecbf39e40d59d06d31cf82751dfc3ce74d69f867e00525fc53d4",
	                    "EndpointID": "9cbbeea094bbab2ee918b4992a994b8531721589014b4de06d3b4a9c17321c18",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-804700",
	                        "cb41977f8665"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-804700 -n functional-804700
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804700 logs -n 25: (2.2407868s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-382400 --log_dir                                     | nospam-382400     | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-382400 --log_dir                                     | nospam-382400     | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-382400 --log_dir                                     | nospam-382400     | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-382400 --log_dir                                     | nospam-382400     | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-382400 --log_dir                                     | nospam-382400     | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-382400 --log_dir                                     | nospam-382400     | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-382400 --log_dir                                     | nospam-382400     | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	|         | C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-382400                                            | nospam-382400     | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:51 UTC |
	| start   | -p functional-804700                                        | functional-804700 | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:51 UTC | 15 Sep 24 06:53 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=docker                                  |                   |                   |         |                     |                     |
	| start   | -p functional-804700                                        | functional-804700 | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:53 UTC | 15 Sep 24 06:54 UTC |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-804700 cache add                                 | functional-804700 | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:54 UTC | 15 Sep 24 06:54 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-804700 cache add                                 | functional-804700 | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:54 UTC | 15 Sep 24 06:54 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-804700 cache add                                 | functional-804700 | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:54 UTC | 15 Sep 24 06:54 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-804700 cache add                                 | functional-804700 | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:54 UTC | 15 Sep 24 06:54 UTC |
	|         | minikube-local-cache-test:functional-804700                 |                   |                   |         |                     |                     |
	| cache   | functional-804700 cache delete                              | functional-804700 | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:54 UTC | 15 Sep 24 06:54 UTC |
	|         | minikube-local-cache-test:functional-804700                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:54 UTC | 15 Sep 24 06:54 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:54 UTC | 15 Sep 24 06:54 UTC |
	| ssh     | functional-804700 ssh sudo                                  | functional-804700 | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:54 UTC | 15 Sep 24 06:54 UTC |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-804700                                           | functional-804700 | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:54 UTC | 15 Sep 24 06:54 UTC |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-804700 ssh                                       | functional-804700 | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:54 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-804700 cache reload                              | functional-804700 | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:54 UTC | 15 Sep 24 06:54 UTC |
	| ssh     | functional-804700 ssh                                       | functional-804700 | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:54 UTC | 15 Sep 24 06:54 UTC |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:54 UTC | 15 Sep 24 06:54 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:54 UTC | 15 Sep 24 06:54 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-804700 kubectl --                                | functional-804700 | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:54 UTC | 15 Sep 24 06:54 UTC |
	|         | --context functional-804700                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 06:53:31
	Running on machine: minikube2
	Binary: Built with gc go1.23.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 06:53:31.810008    7988 out.go:345] Setting OutFile to fd 1020 ...
	I0915 06:53:31.895232    7988 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:53:31.895232    7988 out.go:358] Setting ErrFile to fd 988...
	I0915 06:53:31.895232    7988 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:53:31.917158    7988 out.go:352] Setting JSON to false
	I0915 06:53:31.920492    7988 start.go:129] hostinfo: {"hostname":"minikube2","uptime":6985,"bootTime":1726376226,"procs":181,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0915 06:53:31.920492    7988 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 06:53:31.925099    7988 out.go:177] * [functional-804700] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0915 06:53:31.927795    7988 notify.go:220] Checking for updates...
	I0915 06:53:31.929630    7988 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0915 06:53:31.932412    7988 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 06:53:31.935515    7988 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0915 06:53:31.938155    7988 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 06:53:31.943089    7988 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:53:31.946830    7988 config.go:182] Loaded profile config "functional-804700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 06:53:31.946892    7988 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:53:32.172370    7988 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.1 (166053)
	I0915 06:53:32.179676    7988 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:53:32.512348    7988 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:true NGoroutines:81 SystemTime:2024-09-15 06:53:32.481910311 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0915 06:53:32.517250    7988 out.go:177] * Using the docker driver based on existing profile
	I0915 06:53:32.519647    7988 start.go:297] selected driver: docker
	I0915 06:53:32.519703    7988 start.go:901] validating driver "docker" against &{Name:functional-804700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-804700 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:53:32.519814    7988 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 06:53:32.536021    7988 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:53:32.874328    7988 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:true NGoroutines:81 SystemTime:2024-09-15 06:53:32.841579455 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0915 06:53:32.986434    7988 cni.go:84] Creating CNI manager for ""
	I0915 06:53:32.986434    7988 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 06:53:32.986626    7988 start.go:340] cluster config:
	{Name:functional-804700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-804700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:53:32.989703    7988 out.go:177] * Starting "functional-804700" primary control-plane node in "functional-804700" cluster
	I0915 06:53:32.993708    7988 cache.go:121] Beginning downloading kic base image for docker with docker
	I0915 06:53:32.996243    7988 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0915 06:53:33.002063    7988 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 06:53:33.002063    7988 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0915 06:53:33.002063    7988 preload.go:146] Found local preload: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0915 06:53:33.002063    7988 cache.go:56] Caching tarball of preloaded images
	I0915 06:53:33.002063    7988 preload.go:172] Found C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0915 06:53:33.002063    7988 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0915 06:53:33.002063    7988 profile.go:143] Saving config to C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-804700\config.json ...
	W0915 06:53:33.112544    7988 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0915 06:53:33.112544    7988 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0915 06:53:33.112544    7988 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1726358845-19644@sha256_4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0.tar
	I0915 06:53:33.112544    7988 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1726358845-19644@sha256_4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0.tar
	I0915 06:53:33.112544    7988 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0915 06:53:33.112544    7988 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0915 06:53:33.112544    7988 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0915 06:53:33.113473    7988 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0915 06:53:33.113473    7988 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0915 06:53:33.113473    7988 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1726358845-19644@sha256_4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0.tar
	I0915 06:53:33.125485    7988 image.go:273] response: 
	I0915 06:53:33.480491    7988 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0915 06:53:33.480491    7988 cache.go:194] Successfully downloaded all kic artifacts
	I0915 06:53:33.480491    7988 start.go:360] acquireMachinesLock for functional-804700: {Name:mkb5f27df19ba3c17c295b7094229618fd33ccf6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 06:53:33.480491    7988 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-804700"
	I0915 06:53:33.480491    7988 start.go:96] Skipping create...Using existing machine configuration
	I0915 06:53:33.480491    7988 fix.go:54] fixHost starting: 
	I0915 06:53:33.495486    7988 cli_runner.go:164] Run: docker container inspect functional-804700 --format={{.State.Status}}
	I0915 06:53:33.582763    7988 fix.go:112] recreateIfNeeded on functional-804700: state=Running err=<nil>
	W0915 06:53:33.582763    7988 fix.go:138] unexpected machine state, will restart: <nil>
	I0915 06:53:33.586766    7988 out.go:177] * Updating the running docker "functional-804700" container ...
	I0915 06:53:33.588771    7988 machine.go:93] provisionDockerMachine start ...
	I0915 06:53:33.595763    7988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-804700
	I0915 06:53:33.687415    7988 main.go:141] libmachine: Using SSH client type: native
	I0915 06:53:33.688490    7988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfc9a00] 0xfcc540 <nil>  [] 0s} 127.0.0.1 49866 <nil> <nil>}
	I0915 06:53:33.688490    7988 main.go:141] libmachine: About to run SSH command:
	hostname
	I0915 06:53:33.884226    7988 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-804700
	
	I0915 06:53:33.884226    7988 ubuntu.go:169] provisioning hostname "functional-804700"
	I0915 06:53:33.892990    7988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-804700
	I0915 06:53:33.973810    7988 main.go:141] libmachine: Using SSH client type: native
	I0915 06:53:33.974216    7988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfc9a00] 0xfcc540 <nil>  [] 0s} 127.0.0.1 49866 <nil> <nil>}
	I0915 06:53:33.974216    7988 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-804700 && echo "functional-804700" | sudo tee /etc/hostname
	I0915 06:53:34.200083    7988 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-804700
	
	I0915 06:53:34.212922    7988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-804700
	I0915 06:53:34.300526    7988 main.go:141] libmachine: Using SSH client type: native
	I0915 06:53:34.301182    7988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfc9a00] 0xfcc540 <nil>  [] 0s} 127.0.0.1 49866 <nil> <nil>}
	I0915 06:53:34.301232    7988 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-804700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-804700/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-804700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 06:53:34.505465    7988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 06:53:34.505465    7988 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube2\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube2\minikube-integration\.minikube}
	I0915 06:53:34.506091    7988 ubuntu.go:177] setting up certificates
	I0915 06:53:34.506127    7988 provision.go:84] configureAuth start
	I0915 06:53:34.514910    7988 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-804700
	I0915 06:53:34.599343    7988 provision.go:143] copyHostCerts
	I0915 06:53:34.599956    7988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem
	I0915 06:53:34.600203    7988 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem, removing ...
	I0915 06:53:34.600203    7988 exec_runner.go:203] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.pem
	I0915 06:53:34.600834    7988 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0915 06:53:34.602198    7988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem
	I0915 06:53:34.604115    7988 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem, removing ...
	I0915 06:53:34.604115    7988 exec_runner.go:203] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cert.pem
	I0915 06:53:34.604693    7988 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0915 06:53:34.606637    7988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem
	I0915 06:53:34.606637    7988 exec_runner.go:144] found C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem, removing ...
	I0915 06:53:34.607241    7988 exec_runner.go:203] rm: C:\Users\jenkins.minikube2\minikube-integration\.minikube\key.pem
	I0915 06:53:34.607272    7988 exec_runner.go:151] cp: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube2\minikube-integration\.minikube/key.pem (1679 bytes)
	I0915 06:53:34.609155    7988 provision.go:117] generating server cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-804700 san=[127.0.0.1 192.168.49.2 functional-804700 localhost minikube]
	I0915 06:53:34.766909    7988 provision.go:177] copyRemoteCerts
	I0915 06:53:34.777859    7988 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 06:53:34.784916    7988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-804700
	I0915 06:53:34.863467    7988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49866 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-804700\id_rsa Username:docker}
	I0915 06:53:34.993503    7988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0915 06:53:34.994130    7988 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0915 06:53:35.046074    7988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0915 06:53:35.046074    7988 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0915 06:53:35.090978    7988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0915 06:53:35.091511    7988 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0915 06:53:35.143067    7988 provision.go:87] duration metric: took 636.9ms to configureAuth
	I0915 06:53:35.143067    7988 ubuntu.go:193] setting minikube options for container-runtime
	I0915 06:53:35.144448    7988 config.go:182] Loaded profile config "functional-804700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 06:53:35.155633    7988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-804700
	I0915 06:53:35.234608    7988 main.go:141] libmachine: Using SSH client type: native
	I0915 06:53:35.235056    7988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfc9a00] 0xfcc540 <nil>  [] 0s} 127.0.0.1 49866 <nil> <nil>}
	I0915 06:53:35.235056    7988 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0915 06:53:35.435412    7988 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0915 06:53:35.435412    7988 ubuntu.go:71] root file system type: overlay
	I0915 06:53:35.436599    7988 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0915 06:53:35.445878    7988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-804700
	I0915 06:53:35.532586    7988 main.go:141] libmachine: Using SSH client type: native
	I0915 06:53:35.533375    7988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfc9a00] 0xfcc540 <nil>  [] 0s} 127.0.0.1 49866 <nil> <nil>}
	I0915 06:53:35.533573    7988 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0915 06:53:35.755210    7988 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0915 06:53:35.762260    7988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-804700
	I0915 06:53:35.840818    7988 main.go:141] libmachine: Using SSH client type: native
	I0915 06:53:35.840896    7988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfc9a00] 0xfcc540 <nil>  [] 0s} 127.0.0.1 49866 <nil> <nil>}
	I0915 06:53:35.840896    7988 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0915 06:53:36.044225    7988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 06:53:36.044225    7988 machine.go:96] duration metric: took 2.455433s to provisionDockerMachine
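The SSH command above replaces /lib/systemd/system/docker.service only when the freshly rendered unit differs from what is already installed: it writes docker.service.new, diffs it against the current unit, and only on a difference moves it into place and re-enables/restarts the daemon, which keeps repeated provisioning idempotent. A small Go sketch of the same compare-then-swap pattern follows; the paths and the restart hook are placeholders, not minikube's code.

// Sketch of the compare-then-swap update the SSH command above performs on
// docker.service: only replace the unit (and restart) when the rendered
// content actually changed. Paths and the restart hook are placeholders.
package main

import (
	"bytes"
	"fmt"
	"os"
)

// updateIfChanged writes rendered to path only when it differs from the
// current contents, then calls onChange (stand-in for daemon-reload/restart).
func updateIfChanged(path string, rendered []byte, onChange func() error) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return nil // unit is already up to date; leave the service alone
	}
	tmp := path + ".new"
	if err := os.WriteFile(tmp, rendered, 0o644); err != nil {
		return err
	}
	if err := os.Rename(tmp, path); err != nil {
		return err
	}
	return onChange()
}

func main() {
	unit := []byte("[Unit]\nDescription=example service\n")
	err := updateIfChanged("/tmp/example.service", unit, func() error {
		fmt.Println("would run: systemctl daemon-reload && systemctl restart example")
		return nil
	})
	if err != nil {
		fmt.Println("update failed:", err)
	}
}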
	I0915 06:53:36.044367    7988 start.go:293] postStartSetup for "functional-804700" (driver="docker")
	I0915 06:53:36.044367    7988 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 06:53:36.056184    7988 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 06:53:36.063082    7988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-804700
	I0915 06:53:36.135779    7988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49866 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-804700\id_rsa Username:docker}
	I0915 06:53:36.276974    7988 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 06:53:36.288804    7988 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0915 06:53:36.288804    7988 command_runner.go:130] > NAME="Ubuntu"
	I0915 06:53:36.288804    7988 command_runner.go:130] > VERSION_ID="22.04"
	I0915 06:53:36.288804    7988 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0915 06:53:36.288804    7988 command_runner.go:130] > VERSION_CODENAME=jammy
	I0915 06:53:36.288804    7988 command_runner.go:130] > ID=ubuntu
	I0915 06:53:36.288804    7988 command_runner.go:130] > ID_LIKE=debian
	I0915 06:53:36.288804    7988 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0915 06:53:36.288804    7988 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0915 06:53:36.288804    7988 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0915 06:53:36.288804    7988 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0915 06:53:36.288804    7988 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0915 06:53:36.288804    7988 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0915 06:53:36.288804    7988 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0915 06:53:36.288804    7988 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0915 06:53:36.288804    7988 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0915 06:53:36.288804    7988 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\addons for local assets ...
	I0915 06:53:36.288804    7988 filesync.go:126] Scanning C:\Users\jenkins.minikube2\minikube-integration\.minikube\files for local assets ...
	I0915 06:53:36.290880    7988 filesync.go:149] local asset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\85842.pem -> 85842.pem in /etc/ssl/certs
	I0915 06:53:36.290880    7988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\85842.pem -> /etc/ssl/certs/85842.pem
	I0915 06:53:36.292033    7988 filesync.go:149] local asset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\test\nested\copy\8584\hosts -> hosts in /etc/test/nested/copy/8584
	I0915 06:53:36.292113    7988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\test\nested\copy\8584\hosts -> /etc/test/nested/copy/8584/hosts
	I0915 06:53:36.304775    7988 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/8584
	I0915 06:53:36.324687    7988 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\85842.pem --> /etc/ssl/certs/85842.pem (1708 bytes)
	I0915 06:53:36.371573    7988 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\test\nested\copy\8584\hosts --> /etc/test/nested/copy/8584/hosts (40 bytes)
	I0915 06:53:36.419665    7988 start.go:296] duration metric: took 375.2952ms for postStartSetup
	I0915 06:53:36.431256    7988 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 06:53:36.438116    7988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-804700
	I0915 06:53:36.516295    7988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49866 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-804700\id_rsa Username:docker}
	I0915 06:53:36.646265    7988 command_runner.go:130] > 1%
	I0915 06:53:36.659117    7988 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0915 06:53:36.674293    7988 command_runner.go:130] > 951G
	I0915 06:53:36.674293    7988 fix.go:56] duration metric: took 3.1937748s for fixHost
	I0915 06:53:36.674293    7988 start.go:83] releasing machines lock for "functional-804700", held for 3.1937748s
	I0915 06:53:36.683129    7988 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-804700
	I0915 06:53:36.760048    7988 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0915 06:53:36.768764    7988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-804700
	I0915 06:53:36.772445    7988 ssh_runner.go:195] Run: cat /version.json
	I0915 06:53:36.778758    7988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-804700
	I0915 06:53:36.838280    7988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49866 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-804700\id_rsa Username:docker}
	I0915 06:53:36.846285    7988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49866 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-804700\id_rsa Username:docker}
	I0915 06:53:36.962173    7988 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W0915 06:53:36.962173    7988 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0915 06:53:36.970170    7988 command_runner.go:130] > {"iso_version": "v1.34.0-1726281733-19643", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "f890713149c79cf50e25c13e6a5c0470aa0f0450"}
	I0915 06:53:36.982165    7988 ssh_runner.go:195] Run: systemctl --version
	I0915 06:53:36.993484    7988 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I0915 06:53:36.993484    7988 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0915 06:53:37.005757    7988 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0915 06:53:37.018931    7988 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0915 06:53:37.018931    7988 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0915 06:53:37.018931    7988 command_runner.go:130] > Device: 8ah/138d	Inode: 224         Links: 1
	I0915 06:53:37.018931    7988 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0915 06:53:37.018931    7988 command_runner.go:130] > Access: 2024-09-15 06:31:26.368484295 +0000
	I0915 06:53:37.018931    7988 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0915 06:53:37.018931    7988 command_runner.go:130] > Change: 2024-09-15 06:30:48.145714497 +0000
	I0915 06:53:37.018931    7988 command_runner.go:130] >  Birth: 2024-09-15 06:30:48.145714497 +0000
	I0915 06:53:37.030606    7988 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0915 06:53:37.051278    7988 command_runner.go:130] ! find: '\\etc\\cni\\net.d': No such file or directory
	W0915 06:53:37.053478    7988 start.go:439] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I0915 06:53:37.064887    7988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W0915 06:53:37.073207    7988 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W0915 06:53:37.073207    7988 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0915 06:53:37.088167    7988 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0915 06:53:37.088167    7988 start.go:495] detecting cgroup driver to use...
	I0915 06:53:37.088167    7988 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0915 06:53:37.088167    7988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 06:53:37.123856    7988 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0915 06:53:37.140225    7988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0915 06:53:37.177947    7988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0915 06:53:37.199246    7988 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0915 06:53:37.210246    7988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0915 06:53:37.242088    7988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0915 06:53:37.275822    7988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0915 06:53:37.311248    7988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0915 06:53:37.351994    7988 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 06:53:37.382851    7988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0915 06:53:37.422526    7988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0915 06:53:37.458220    7988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0915 06:53:37.492256    7988 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 06:53:37.512352    7988 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0915 06:53:37.523579    7988 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0915 06:53:37.558351    7988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:53:37.770703    7988 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0915 06:53:48.157579    7988 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (10.386788s)
	I0915 06:53:48.157579    7988 start.go:495] detecting cgroup driver to use...
	I0915 06:53:48.157579    7988 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0915 06:53:48.171457    7988 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0915 06:53:48.197853    7988 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0915 06:53:48.197853    7988 command_runner.go:130] > [Unit]
	I0915 06:53:48.197853    7988 command_runner.go:130] > Description=Docker Application Container Engine
	I0915 06:53:48.197853    7988 command_runner.go:130] > Documentation=https://docs.docker.com
	I0915 06:53:48.197853    7988 command_runner.go:130] > BindsTo=containerd.service
	I0915 06:53:48.197853    7988 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0915 06:53:48.197853    7988 command_runner.go:130] > Wants=network-online.target
	I0915 06:53:48.197853    7988 command_runner.go:130] > Requires=docker.socket
	I0915 06:53:48.197853    7988 command_runner.go:130] > StartLimitBurst=3
	I0915 06:53:48.197853    7988 command_runner.go:130] > StartLimitIntervalSec=60
	I0915 06:53:48.197853    7988 command_runner.go:130] > [Service]
	I0915 06:53:48.197853    7988 command_runner.go:130] > Type=notify
	I0915 06:53:48.197853    7988 command_runner.go:130] > Restart=on-failure
	I0915 06:53:48.197853    7988 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0915 06:53:48.197853    7988 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0915 06:53:48.197853    7988 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0915 06:53:48.197853    7988 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0915 06:53:48.197853    7988 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0915 06:53:48.197853    7988 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0915 06:53:48.197853    7988 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0915 06:53:48.197853    7988 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0915 06:53:48.197853    7988 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0915 06:53:48.197853    7988 command_runner.go:130] > ExecStart=
	I0915 06:53:48.197853    7988 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0915 06:53:48.197853    7988 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0915 06:53:48.197853    7988 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0915 06:53:48.197853    7988 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0915 06:53:48.197853    7988 command_runner.go:130] > LimitNOFILE=infinity
	I0915 06:53:48.197853    7988 command_runner.go:130] > LimitNPROC=infinity
	I0915 06:53:48.197853    7988 command_runner.go:130] > LimitCORE=infinity
	I0915 06:53:48.198714    7988 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0915 06:53:48.198714    7988 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0915 06:53:48.198714    7988 command_runner.go:130] > TasksMax=infinity
	I0915 06:53:48.198714    7988 command_runner.go:130] > TimeoutStartSec=0
	I0915 06:53:48.198714    7988 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0915 06:53:48.198714    7988 command_runner.go:130] > Delegate=yes
	I0915 06:53:48.198714    7988 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0915 06:53:48.198714    7988 command_runner.go:130] > KillMode=process
	I0915 06:53:48.198714    7988 command_runner.go:130] > [Install]
	I0915 06:53:48.198714    7988 command_runner.go:130] > WantedBy=multi-user.target
	I0915 06:53:48.198714    7988 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0915 06:53:48.214429    7988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0915 06:53:48.239819    7988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 06:53:48.276573    7988 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0915 06:53:48.292821    7988 ssh_runner.go:195] Run: which cri-dockerd
	I0915 06:53:48.306003    7988 command_runner.go:130] > /usr/bin/cri-dockerd
	I0915 06:53:48.320896    7988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0915 06:53:48.343759    7988 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0915 06:53:48.397609    7988 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0915 06:53:48.614215    7988 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0915 06:53:48.794466    7988 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0915 06:53:48.795107    7988 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0915 06:53:48.843007    7988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:53:49.045447    7988 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0915 06:53:50.007239    7988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0915 06:53:50.053648    7988 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0915 06:53:50.102770    7988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0915 06:53:50.139595    7988 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0915 06:53:50.301367    7988 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0915 06:53:50.459821    7988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:53:50.620111    7988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0915 06:53:50.659808    7988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0915 06:53:50.695311    7988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:53:50.878021    7988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0915 06:53:51.045498    7988 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0915 06:53:51.058715    7988 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0915 06:53:51.070870    7988 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0915 06:53:51.070870    7988 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0915 06:53:51.070870    7988 command_runner.go:130] > Device: 93h/147d	Inode: 737         Links: 1
	I0915 06:53:51.070870    7988 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0915 06:53:51.070870    7988 command_runner.go:130] > Access: 2024-09-15 06:53:50.892932191 +0000
	I0915 06:53:51.070870    7988 command_runner.go:130] > Modify: 2024-09-15 06:53:50.892932191 +0000
	I0915 06:53:51.070870    7988 command_runner.go:130] > Change: 2024-09-15 06:53:50.892932191 +0000
	I0915 06:53:51.070870    7988 command_runner.go:130] >  Birth: -
	I0915 06:53:51.070870    7988 start.go:563] Will wait 60s for crictl version
	I0915 06:53:51.082019    7988 ssh_runner.go:195] Run: which crictl
	I0915 06:53:51.091456    7988 command_runner.go:130] > /usr/bin/crictl
	I0915 06:53:51.102870    7988 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 06:53:51.175304    7988 command_runner.go:130] > Version:  0.1.0
	I0915 06:53:51.175304    7988 command_runner.go:130] > RuntimeName:  docker
	I0915 06:53:51.175304    7988 command_runner.go:130] > RuntimeVersion:  27.2.1
	I0915 06:53:51.175304    7988 command_runner.go:130] > RuntimeApiVersion:  v1
	I0915 06:53:51.175304    7988 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0915 06:53:51.183443    7988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0915 06:53:51.234516    7988 command_runner.go:130] > 27.2.1
	I0915 06:53:51.242540    7988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0915 06:53:51.299361    7988 command_runner.go:130] > 27.2.1
	I0915 06:53:51.304812    7988 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0915 06:53:51.313255    7988 cli_runner.go:164] Run: docker exec -t functional-804700 dig +short host.docker.internal
	I0915 06:53:51.490689    7988 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0915 06:53:51.500661    7988 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0915 06:53:51.513204    7988 command_runner.go:130] > 192.168.65.254	host.minikube.internal
	I0915 06:53:51.521036    7988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-804700
	I0915 06:53:51.589168    7988 kubeadm.go:883] updating cluster {Name:functional-804700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-804700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0915 06:53:51.589168    7988 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 06:53:51.598908    7988 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 06:53:51.641968    7988 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.1
	I0915 06:53:51.642064    7988 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.1
	I0915 06:53:51.642064    7988 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.1
	I0915 06:53:51.642064    7988 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.1
	I0915 06:53:51.642064    7988 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0915 06:53:51.642064    7988 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0915 06:53:51.642225    7988 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0915 06:53:51.642225    7988 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 06:53:51.642317    7988 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0915 06:53:51.642357    7988 docker.go:615] Images already preloaded, skipping extraction
	I0915 06:53:51.649418    7988 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0915 06:53:51.693877    7988 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.1
	I0915 06:53:51.693877    7988 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.1
	I0915 06:53:51.693877    7988 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.1
	I0915 06:53:51.693877    7988 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.1
	I0915 06:53:51.693877    7988 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0915 06:53:51.693877    7988 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0915 06:53:51.693877    7988 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0915 06:53:51.693877    7988 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 06:53:51.693877    7988 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0915 06:53:51.693877    7988 cache_images.go:84] Images are preloaded, skipping loading
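The two docker images --format {{.Repository}}:{{.Tag}} listings above are compared against the image set expected for Kubernetes v1.31.1, and loading of the preload tarball is skipped because every image is already present. A short Go sketch of that check, using the same docker command and the image list taken from the log (running docker on PATH is assumed):

// Sketch of the preload check above: list the images the runtime already has
// and report any expected Kubernetes v1.31.1 images that are missing.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Expected image set, copied from the log output above.
	expected := []string{
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/kube-controller-manager:v1.31.1",
		"registry.k8s.io/kube-scheduler:v1.31.1",
		"registry.k8s.io/kube-proxy:v1.31.1",
		"registry.k8s.io/coredns/coredns:v1.11.3",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/pause:3.10",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}

	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		fmt.Println("docker images failed:", err)
		return
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}

	missing := false
	for _, img := range expected {
		if !have[img] {
			missing = true
			fmt.Println("missing:", img)
		}
	}
	if !missing {
		fmt.Println("Images already preloaded, skipping extraction")
	}
}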
	I0915 06:53:51.693877    7988 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.31.1 docker true true} ...
	I0915 06:53:51.694525    7988 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-804700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:functional-804700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0915 06:53:51.702529    7988 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0915 06:53:51.795664    7988 command_runner.go:130] > cgroupfs
	I0915 06:53:51.795664    7988 cni.go:84] Creating CNI manager for ""
	I0915 06:53:51.795664    7988 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 06:53:51.795664    7988 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0915 06:53:51.795664    7988 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-804700 NodeName:functional-804700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0915 06:53:51.795664    7988 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-804700"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
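The kubeadm.yaml above is generated from the option set printed at kubeadm.go:181 (advertise address, API server port 8441, pod and service CIDRs, CRI socket, cgroup driver, cluster name). A simplified text/template sketch of rendering a fragment of such a config from those fields follows; the struct and template here are illustrative assumptions, not minikube's real template.

// Simplified sketch: render a fragment of a kubeadm config from a small
// options struct, in the spirit of the kubeadm.yaml shown in the log.
// The struct and template are illustrative, not minikube's actual ones.
package main

import (
	"os"
	"text/template"
)

type kubeadmOpts struct {
	ClusterName      string
	AdvertiseAddress string
	BindPort         int
	PodSubnet        string
	ServiceSubnet    string
	CRISocket        string
	CgroupDriver     string
}

const configTemplate = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.ClusterName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: {{.CgroupDriver}}
`

func main() {
	opts := kubeadmOpts{
		ClusterName:      "functional-804700",
		AdvertiseAddress: "192.168.49.2",
		BindPort:         8441,
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
		CRISocket:        "unix:///var/run/cri-dockerd.sock",
		CgroupDriver:     "cgroupfs",
	}
	t := template.Must(template.New("kubeadm").Parse(configTemplate))
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}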
	
	I0915 06:53:51.810566    7988 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0915 06:53:51.834263    7988 command_runner.go:130] > kubeadm
	I0915 06:53:51.834263    7988 command_runner.go:130] > kubectl
	I0915 06:53:51.834263    7988 command_runner.go:130] > kubelet
	I0915 06:53:51.834263    7988 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 06:53:51.846253    7988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 06:53:51.867973    7988 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0915 06:53:51.905117    7988 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 06:53:51.939457    7988 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0915 06:53:51.988038    7988 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0915 06:53:52.001024    7988 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I0915 06:53:52.012447    7988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:53:52.181200    7988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 06:53:52.207763    7988 certs.go:68] Setting up C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-804700 for IP: 192.168.49.2
	I0915 06:53:52.207812    7988 certs.go:194] generating shared ca certs ...
	I0915 06:53:52.207812    7988 certs.go:226] acquiring lock for ca certs: {Name:mka39b35711ce17aa627001b408a7adb2f266bbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:53:52.208398    7988 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key
	I0915 06:53:52.208877    7988 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key
	I0915 06:53:52.209068    7988 certs.go:256] generating profile certs ...
	I0915 06:53:52.209618    7988 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-804700\client.key
	I0915 06:53:52.209618    7988 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-804700\apiserver.key.37902c92
	I0915 06:53:52.209618    7988 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-804700\proxy-client.key
	I0915 06:53:52.210529    7988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0915 06:53:52.210873    7988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0915 06:53:52.211057    7988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0915 06:53:52.211254    7988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0915 06:53:52.211376    7988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-804700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0915 06:53:52.211376    7988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-804700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0915 06:53:52.211376    7988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-804700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0915 06:53:52.211376    7988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-804700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0915 06:53:52.212328    7988 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\8584.pem (1338 bytes)
	W0915 06:53:52.212646    7988 certs.go:480] ignoring C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\8584_empty.pem, impossibly tiny 0 bytes
	I0915 06:53:52.212823    7988 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0915 06:53:52.213227    7988 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0915 06:53:52.213227    7988 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0915 06:53:52.213227    7988 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0915 06:53:52.214135    7988 certs.go:484] found cert: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\85842.pem (1708 bytes)
	I0915 06:53:52.214135    7988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\85842.pem -> /usr/share/ca-certificates/85842.pem
	I0915 06:53:52.214135    7988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:53:52.214678    7988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\8584.pem -> /usr/share/ca-certificates/8584.pem
	I0915 06:53:52.216179    7988 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 06:53:52.266263    7988 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 06:53:52.314060    7988 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 06:53:52.357925    7988 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0915 06:53:52.400298    7988 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-804700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0915 06:53:52.448638    7988 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-804700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0915 06:53:52.494646    7988 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-804700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 06:53:52.543591    7988 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\functional-804700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0915 06:53:52.590598    7988 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\ssl\certs\85842.pem --> /usr/share/ca-certificates/85842.pem (1708 bytes)
	I0915 06:53:52.634244    7988 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 06:53:52.677264    7988 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\minikube-integration\.minikube\certs\8584.pem --> /usr/share/ca-certificates/8584.pem (1338 bytes)
	I0915 06:53:52.723899    7988 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 06:53:52.772682    7988 ssh_runner.go:195] Run: openssl version
	I0915 06:53:52.787214    7988 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0915 06:53:52.799910    7988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/85842.pem && ln -fs /usr/share/ca-certificates/85842.pem /etc/ssl/certs/85842.pem"
	I0915 06:53:52.833556    7988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/85842.pem
	I0915 06:53:52.844324    7988 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 15 06:51 /usr/share/ca-certificates/85842.pem
	I0915 06:53:52.844324    7988 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 15 06:51 /usr/share/ca-certificates/85842.pem
	I0915 06:53:52.856020    7988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/85842.pem
	I0915 06:53:52.872956    7988 command_runner.go:130] > 3ec20f2e
	I0915 06:53:52.884876    7988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/85842.pem /etc/ssl/certs/3ec20f2e.0"
	I0915 06:53:52.919458    7988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 06:53:52.953269    7988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:53:52.965773    7988 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 15 06:33 /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:53:52.965773    7988 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:33 /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:53:52.978312    7988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:53:52.994962    7988 command_runner.go:130] > b5213941
	I0915 06:53:53.006124    7988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0915 06:53:53.039318    7988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8584.pem && ln -fs /usr/share/ca-certificates/8584.pem /etc/ssl/certs/8584.pem"
	I0915 06:53:53.073457    7988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8584.pem
	I0915 06:53:53.084979    7988 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 15 06:51 /usr/share/ca-certificates/8584.pem
	I0915 06:53:53.084979    7988 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 15 06:51 /usr/share/ca-certificates/8584.pem
	I0915 06:53:53.098560    7988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8584.pem
	I0915 06:53:53.113926    7988 command_runner.go:130] > 51391683
	I0915 06:53:53.126492    7988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8584.pem /etc/ssl/certs/51391683.0"
	I0915 06:53:53.158459    7988 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 06:53:53.171378    7988 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 06:53:53.171378    7988 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0915 06:53:53.171378    7988 command_runner.go:130] > Device: 830h/2096d	Inode: 17030       Links: 1
	I0915 06:53:53.171378    7988 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0915 06:53:53.171378    7988 command_runner.go:130] > Access: 2024-09-15 06:52:43.539580440 +0000
	I0915 06:53:53.171378    7988 command_runner.go:130] > Modify: 2024-09-15 06:52:43.539580440 +0000
	I0915 06:53:53.171378    7988 command_runner.go:130] > Change: 2024-09-15 06:52:43.539580440 +0000
	I0915 06:53:53.171378    7988 command_runner.go:130] >  Birth: 2024-09-15 06:52:43.539580440 +0000
	I0915 06:53:53.183224    7988 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0915 06:53:53.198228    7988 command_runner.go:130] > Certificate will not expire
	I0915 06:53:53.212524    7988 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0915 06:53:53.229121    7988 command_runner.go:130] > Certificate will not expire
	I0915 06:53:53.240435    7988 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0915 06:53:53.257013    7988 command_runner.go:130] > Certificate will not expire
	I0915 06:53:53.267590    7988 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0915 06:53:53.283997    7988 command_runner.go:130] > Certificate will not expire
	I0915 06:53:53.296097    7988 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0915 06:53:53.311002    7988 command_runner.go:130] > Certificate will not expire
	I0915 06:53:53.325735    7988 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0915 06:53:53.340608    7988 command_runner.go:130] > Certificate will not expire
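Each openssl x509 -noout -checkend 86400 run above asks whether the given control-plane certificate will still be valid 24 hours from now. Below is a Go equivalent of that check using crypto/x509; the certificate path is a placeholder.

// Go equivalent of the `openssl x509 -noout -checkend 86400` calls above:
// report whether a PEM certificate expires within the next 24 hours.
// The certificate path is a placeholder.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within window.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}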
	I0915 06:53:53.340608    7988 kubeadm.go:392] StartCluster: {Name:functional-804700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-804700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:53:53.349935    7988 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0915 06:53:53.408256    7988 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0915 06:53:53.429003    7988 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0915 06:53:53.429003    7988 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0915 06:53:53.429003    7988 command_runner.go:130] > /var/lib/minikube/etcd:
	I0915 06:53:53.429003    7988 command_runner.go:130] > member
	I0915 06:53:53.430233    7988 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0915 06:53:53.430233    7988 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0915 06:53:53.442184    7988 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0915 06:53:53.463478    7988 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0915 06:53:53.471125    7988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-804700
	I0915 06:53:53.548800    7988 kubeconfig.go:125] found "functional-804700" server: "https://127.0.0.1:49870"
	I0915 06:53:53.550085    7988 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0915 06:53:53.551133    7988 kapi.go:59] client config for functional-804700: &rest.Config{Host:"https://127.0.0.1:49870", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.key", CAFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x26a3d40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0915 06:53:53.552791    7988 cert_rotation.go:140] Starting client certificate rotation controller
	I0915 06:53:53.567510    7988 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0915 06:53:53.590647    7988 kubeadm.go:630] The running cluster does not require reconfiguration: 127.0.0.1
	I0915 06:53:53.590647    7988 kubeadm.go:597] duration metric: took 160.4128ms to restartPrimaryControlPlane
	I0915 06:53:53.590647    7988 kubeadm.go:394] duration metric: took 250.0367ms to StartCluster
	I0915 06:53:53.590647    7988 settings.go:142] acquiring lock: {Name:mk9684611c6005d251a6ecf406b4611c2c1e30f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:53:53.590647    7988 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0915 06:53:53.591952    7988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube2\minikube-integration\kubeconfig: {Name:mk7e72b8b9c82f9d87d6aed6af6962a1c1fa489d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:53:53.593544    7988 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0915 06:53:53.593544    7988 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
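Editorial note: addons.go:507 drives enablement from a plain name-to-bool map; only default-storageclass and storage-provisioner are true here because the profile was started without extra addons. A sketch of that filtering step (illustrative only):

    // imports: "sort"
    // enabledAddons returns the names whose toggle is true, in a stable order for logging.
    func enabledAddons(toEnable map[string]bool) []string {
        var names []string
        for name, on := range toEnable {
            if on {
                names = append(names, name)
            }
        }
        sort.Strings(names)
        return names
    }

For the map logged above this yields [default-storageclass storage-provisioner], matching the two "Setting addon ... in functional-804700" lines that follow.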
	I0915 06:53:53.593544    7988 addons.go:69] Setting storage-provisioner=true in profile "functional-804700"
	I0915 06:53:53.593544    7988 addons.go:69] Setting default-storageclass=true in profile "functional-804700"
	I0915 06:53:53.593544    7988 addons.go:234] Setting addon storage-provisioner=true in "functional-804700"
	I0915 06:53:53.593544    7988 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-804700"
	W0915 06:53:53.593544    7988 addons.go:243] addon storage-provisioner should already be in state true
	I0915 06:53:53.593544    7988 config.go:182] Loaded profile config "functional-804700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 06:53:53.594080    7988 host.go:66] Checking if "functional-804700" exists ...
	I0915 06:53:53.597690    7988 out.go:177] * Verifying Kubernetes components...
	I0915 06:53:53.612225    7988 cli_runner.go:164] Run: docker container inspect functional-804700 --format={{.State.Status}}
	I0915 06:53:53.613218    7988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:53:53.614217    7988 cli_runner.go:164] Run: docker container inspect functional-804700 --format={{.State.Status}}
	I0915 06:53:53.698475    7988 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 06:53:53.701473    7988 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 06:53:53.701473    7988 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0915 06:53:53.705661    7988 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0915 06:53:53.706561    7988 kapi.go:59] client config for functional-804700: &rest.Config{Host:"https://127.0.0.1:49870", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.key", CAFile:"C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x26a3d40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0915 06:53:53.707683    7988 addons.go:234] Setting addon default-storageclass=true in "functional-804700"
	W0915 06:53:53.707781    7988 addons.go:243] addon default-storageclass should already be in state true
	I0915 06:53:53.707833    7988 host.go:66] Checking if "functional-804700" exists ...
	I0915 06:53:53.713735    7988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-804700
	I0915 06:53:53.726424    7988 cli_runner.go:164] Run: docker container inspect functional-804700 --format={{.State.Status}}
	I0915 06:53:53.785153    7988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49866 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-804700\id_rsa Username:docker}
	I0915 06:53:53.794033    7988 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0915 06:53:53.794033    7988 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0915 06:53:53.804040    7988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 06:53:53.804040    7988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-804700
	I0915 06:53:53.848431    7988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-804700
	I0915 06:53:53.882456    7988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49866 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-804700\id_rsa Username:docker}
	I0915 06:53:53.925228    7988 node_ready.go:35] waiting up to 6m0s for node "functional-804700" to be "Ready" ...
	I0915 06:53:53.925285    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:53:53.925285    7988 round_trippers.go:469] Request Headers:
	I0915 06:53:53.925285    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:53:53.925285    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:53:53.930404    7988 round_trippers.go:574] Response Status:  in 5 milliseconds
	I0915 06:53:53.930563    7988 round_trippers.go:577] Response Headers:
	I0915 06:53:53.968219    7988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 06:53:54.147310    7988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
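Editorial note: both manifests are applied with the kubectl binary cached inside the node (/var/lib/minikube/binaries/v1.31.1/kubectl) against the in-VM kubeconfig, not the host's kubectl. A hedged sketch of composing that command, reusing the hypothetical runCmd helper from the earlier sketch:

    // imports: "fmt"
    // applyAddon applies one addon manifest on the node, mirroring the command seen in the log.
    func applyAddon(runCmd func(string) (string, error), k8sVersion, manifest string) error {
        cmd := fmt.Sprintf(
            "sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/%s/kubectl apply -f %s",
            k8sVersion, manifest)
        _, err := runCmd(cmd)
        return err
    }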
	I0915 06:53:54.251366    7988 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0915 06:53:54.257499    7988 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0915 06:53:54.257591    7988 retry.go:31] will retry after 293.498556ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0915 06:53:54.333534    7988 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0915 06:53:54.341270    7988 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0915 06:53:54.341270    7988 retry.go:31] will retry after 195.65099ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0915 06:53:54.552163    7988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0915 06:53:54.563168    7988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 06:53:54.832939    7988 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0915 06:53:54.839729    7988 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0915 06:53:54.839729    7988 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0915 06:53:54.839860    7988 retry.go:31] will retry after 321.640265ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0915 06:53:54.839893    7988 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0915 06:53:54.839950    7988 retry.go:31] will retry after 420.221035ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
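Editorial note: each failed apply is retried after a short, slightly randomized delay (retry.go:31, "will retry after 293ms / 321ms / 420ms ..."), because the apiserver on localhost:8441 is still coming back up after the control-plane restart. A minimal sketch of that retry loop with a jittered, growing delay (illustrative; not minikube's exact backoff schedule):

    // imports: "math/rand", "time"
    // retryApply retries fn up to attempts times, sleeping a jittered, doubling delay
    // between failures, in the spirit of the retry.go pattern shown in the log.
    func retryApply(attempts int, fn func() error) error {
        var err error
        delay := 200 * time.Millisecond
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
            delay *= 2
        }
        return err
    }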
	I0915 06:53:54.930990    7988 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:53:54.931535    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:53:54.931535    7988 round_trippers.go:469] Request Headers:
	I0915 06:53:54.931535    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:53:54.931916    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:53:54.936807    7988 round_trippers.go:574] Response Status:  in 4 milliseconds
	I0915 06:53:54.936807    7988 round_trippers.go:577] Response Headers:
	I0915 06:53:55.174653    7988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0915 06:53:55.272976    7988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 06:53:55.934811    7988 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0915 06:53:55.937353    7988 with_retry.go:234] Got a Retry-After 1s response for attempt 2 to https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:53:55.937500    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:53:55.937500    7988 round_trippers.go:469] Request Headers:
	I0915 06:53:55.937500    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:53:55.937500    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	W0915 06:53:55.939976    7988 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0915 06:53:55.940266    7988 retry.go:31] will retry after 540.725025ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0915 06:53:55.942827    7988 round_trippers.go:574] Response Status:  in 5 milliseconds
	I0915 06:53:55.942827    7988 round_trippers.go:577] Response Headers:
	I0915 06:53:56.052085    7988 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0915 06:53:56.059211    7988 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0915 06:53:56.059432    7988 retry.go:31] will retry after 843.456816ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0915 06:53:56.492817    7988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0915 06:53:56.916773    7988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 06:53:56.943530    7988 with_retry.go:234] Got a Retry-After 1s response for attempt 3 to https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:53:56.943665    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:53:56.943665    7988 round_trippers.go:469] Request Headers:
	I0915 06:53:56.943665    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:53:56.943665    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:53:56.946590    7988 round_trippers.go:574] Response Status:  in 2 milliseconds
	I0915 06:53:56.946590    7988 round_trippers.go:577] Response Headers:
	I0915 06:53:57.335012    7988 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0915 06:53:57.338842    7988 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0915 06:53:57.338842    7988 retry.go:31] will retry after 836.942149ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0915 06:53:57.848984    7988 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0915 06:53:57.849657    7988 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0915 06:53:57.850209    7988 retry.go:31] will retry after 921.50411ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0915 06:53:57.947063    7988 with_retry.go:234] Got a Retry-After 1s response for attempt 4 to https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:53:57.947063    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:53:57.947331    7988 round_trippers.go:469] Request Headers:
	I0915 06:53:57.947393    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:53:57.947393    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:53:57.952507    7988 round_trippers.go:574] Response Status:  in 5 milliseconds
	I0915 06:53:57.952548    7988 round_trippers.go:577] Response Headers:
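Editorial note: while the apiserver restarts, the node GETs come back with an empty status and the client honors a Retry-After of 1s between attempts (with_retry.go:234). A generic sketch of honoring Retry-After with net/http, as a stand-in for client-go's internal retry handling (not its actual code):

    // imports: "fmt", "net/http", "strconv", "time"
    // getWithRetryAfter issues GETs and sleeps for the server-suggested Retry-After
    // (defaulting to 1s) until a 200 is returned or attempts run out.
    func getWithRetryAfter(client *http.Client, url string, attempts int) (*http.Response, error) {
        for i := 0; i < attempts; i++ {
            resp, err := client.Get(url)
            if err == nil && resp.StatusCode == http.StatusOK {
                return resp, nil
            }
            wait := time.Second
            if resp != nil {
                if s := resp.Header.Get("Retry-After"); s != "" {
                    if secs, perr := strconv.Atoi(s); perr == nil {
                        wait = time.Duration(secs) * time.Second
                    }
                }
                resp.Body.Close()
            }
            time.Sleep(wait)
        }
        return nil, fmt.Errorf("no successful response from %s after %d attempts", url, attempts)
    }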
	I0915 06:53:58.189273    7988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0915 06:53:58.785424    7988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 06:53:58.953014    7988 with_retry.go:234] Got a Retry-After 1s response for attempt 5 to https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:53:58.953427    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:53:58.953427    7988 round_trippers.go:469] Request Headers:
	I0915 06:53:58.953487    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:53:58.953566    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:02.039660    7988 round_trippers.go:574] Response Status: 200 OK in 3085 milliseconds
	I0915 06:54:02.039709    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:02.039709    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:02.039709    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:02.039844    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0915 06:54:02.039933    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0915 06:54:02.039933    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:02 GMT
	I0915 06:54:02.039933    7988 round_trippers.go:580]     Audit-Id: 455cac9f-525a-4db7-aa06-fd04b803c85a
	I0915 06:54:02.040230    7988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","resourceVersion":"428","creationTimestamp":"2024-09-15T06:52:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7a3ca67a20528f5dabbb456e8e4ce542b58ef23a","minikube.k8s.io/name":"functional-804700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_15T06_52_57_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-15T06:52:52Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0915 06:54:02.041820    7988 node_ready.go:49] node "functional-804700" has status "Ready":"True"
	I0915 06:54:02.041926    7988 node_ready.go:38] duration metric: took 8.1164663s for node "functional-804700" to be "Ready" ...
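Editorial note: node_ready.go:49 derives "Ready" from the Node's status conditions once the apiserver answers. A client-go sketch of the same check, assuming a clientset built as in the earlier kubeconfig example:

    // imports: "context",
    //          corev1 "k8s.io/api/core/v1",
    //          metav1 "k8s.io/apimachinery/pkg/apis/meta/v1",
    //          "k8s.io/client-go/kubernetes"
    // nodeIsReady reports whether the named node has condition Ready=True.
    func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }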
	I0915 06:54:02.041926    7988 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 06:54:02.042211    7988 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0915 06:54:02.042266    7988 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0915 06:54:02.042564    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/namespaces/kube-system/pods
	I0915 06:54:02.042564    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:02.042620    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:02.042620    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:02.139830    7988 round_trippers.go:574] Response Status: 200 OK in 97 milliseconds
	I0915 06:54:02.139943    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:02.139943    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0915 06:54:02.139943    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0915 06:54:02.139943    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:02 GMT
	I0915 06:54:02.140046    7988 round_trippers.go:580]     Audit-Id: b05b1d4e-f970-47d1-9852-ab4a6c16f807
	I0915 06:54:02.140046    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:02.140046    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:02.142512    7988 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"464"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-j8m5z","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"974817d7-07c9-4087-9ebc-2ad96b730334","resourceVersion":"453","creationTimestamp":"2024-09-15T06:53:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"f049e883-50a7-43be-8441-d3e2d1888fa6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-15T06:53:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f049e883-50a7-43be-8441-d3e2d1888fa6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51630 chars]
	I0915 06:54:02.149092    7988 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-j8m5z" in "kube-system" namespace to be "Ready" ...
	I0915 06:54:02.150166    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-j8m5z
	I0915 06:54:02.150166    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:02.150166    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:02.150166    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:02.246551    7988 round_trippers.go:574] Response Status: 200 OK in 96 milliseconds
	I0915 06:54:02.246551    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:02.246551    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:02 GMT
	I0915 06:54:02.246551    7988 round_trippers.go:580]     Audit-Id: 4240a7a2-a447-44ee-ab14-6e64b5a01bd2
	I0915 06:54:02.246551    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:02.246551    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:02.246551    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0915 06:54:02.246551    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0915 06:54:02.342921    7988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-j8m5z","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"974817d7-07c9-4087-9ebc-2ad96b730334","resourceVersion":"453","creationTimestamp":"2024-09-15T06:53:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"f049e883-50a7-43be-8441-d3e2d1888fa6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-15T06:53:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f049e883-50a7-43be-8441-d3e2d1888fa6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6495 chars]
	I0915 06:54:02.344104    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:54:02.344104    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:02.344104    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:02.344104    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:02.538156    7988 round_trippers.go:574] Response Status: 200 OK in 193 milliseconds
	I0915 06:54:02.538156    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:02.538236    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:02.538236    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:02 GMT
	I0915 06:54:02.538236    7988 round_trippers.go:580]     Audit-Id: 3591b7f1-514c-46af-a9a0-fd6b0eae6ce0
	I0915 06:54:02.538236    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:02.538236    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:02.538236    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:02.538554    7988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","resourceVersion":"428","creationTimestamp":"2024-09-15T06:52:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7a3ca67a20528f5dabbb456e8e4ce542b58ef23a","minikube.k8s.io/name":"functional-804700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_15T06_52_57_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-15T06:52:52Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0915 06:54:02.539079    7988 pod_ready.go:93] pod "coredns-7c65d6cfc9-j8m5z" in "kube-system" namespace has status "Ready":"True"
	I0915 06:54:02.539185    7988 pod_ready.go:82] duration metric: took 389.1044ms for pod "coredns-7c65d6cfc9-j8m5z" in "kube-system" namespace to be "Ready" ...
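Editorial note: pod_ready.go then polls each system-critical pod (coredns, etcd, kube-apiserver, ...) until its PodReady condition is True, with a 6m per-pod budget. A sketch of that wait as a plain poll loop (interval and error wording are illustrative):

    // imports: "context", "fmt", "time",
    //          corev1 "k8s.io/api/core/v1",
    //          metav1 "k8s.io/apimachinery/pkg/apis/meta/v1",
    //          "k8s.io/client-go/kubernetes"
    // waitPodReady polls the pod until condition Ready=True or the timeout expires.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, cond := range pod.Status.Conditions {
                    if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }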
	I0915 06:54:02.539185    7988 pod_ready.go:79] waiting up to 6m0s for pod "etcd-functional-804700" in "kube-system" namespace to be "Ready" ...
	I0915 06:54:02.539185    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/namespaces/kube-system/pods/etcd-functional-804700
	I0915 06:54:02.539491    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:02.539527    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:02.539527    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:02.546740    7988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0915 06:54:02.546740    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:02.546740    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:02 GMT
	I0915 06:54:02.546740    7988 round_trippers.go:580]     Audit-Id: 834b8c84-ba51-4e67-8422-85bee153d401
	I0915 06:54:02.546740    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:02.546740    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:02.546740    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:02.546740    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:02.547071    7988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804700","namespace":"kube-system","uid":"b893c2e4-5138-4a9d-b5e0-6c0a3472a010","resourceVersion":"468","creationTimestamp":"2024-09-15T06:52:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.mirror":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.seen":"2024-09-15T06:52:46.983087970Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-15T06:52:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6683 chars]
	I0915 06:54:02.547887    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:54:02.547887    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:02.547887    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:02.547887    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:02.552915    7988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 06:54:02.553939    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:02.553939    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:02.553939    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:02.553939    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:02.553939    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:02.554103    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:02 GMT
	I0915 06:54:02.554103    7988 round_trippers.go:580]     Audit-Id: 675195c1-9d7f-4d18-8f8c-7ccc891eafb4
	I0915 06:54:02.554249    7988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","resourceVersion":"428","creationTimestamp":"2024-09-15T06:52:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7a3ca67a20528f5dabbb456e8e4ce542b58ef23a","minikube.k8s.io/name":"functional-804700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_15T06_52_57_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-15T06:52:52Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0915 06:54:02.658611    7988 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0915 06:54:02.658983    7988 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.4693911s)
	I0915 06:54:02.659241    7988 round_trippers.go:463] GET https://127.0.0.1:49870/apis/storage.k8s.io/v1/storageclasses
	I0915 06:54:02.659241    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:02.659241    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:02.659241    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:02.734093    7988 round_trippers.go:574] Response Status: 200 OK in 74 milliseconds
	I0915 06:54:02.734093    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:02.734093    7988 round_trippers.go:580]     Audit-Id: f77b786e-3e6d-42ba-8d8c-2ab41c7e5f76
	I0915 06:54:02.734093    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:02.734093    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:02.734647    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:02.734647    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:02.734647    7988 round_trippers.go:580]     Content-Length: 1273
	I0915 06:54:02.734647    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:02 GMT
	I0915 06:54:02.734704    7988 request.go:1351] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"469"},"items":[{"metadata":{"name":"standard","uid":"41b9725b-d36d-49fd-ae7d-dbfc589f198d","resourceVersion":"377","creationTimestamp":"2024-09-15T06:53:02Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-15T06:53:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0915 06:54:02.736156    7988 request.go:1351] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"41b9725b-d36d-49fd-ae7d-dbfc589f198d","resourceVersion":"377","creationTimestamp":"2024-09-15T06:53:02Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-15T06:53:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0915 06:54:02.736245    7988 round_trippers.go:463] PUT https://127.0.0.1:49870/apis/storage.k8s.io/v1/storageclasses/standard
	I0915 06:54:02.736245    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:02.736334    7988 round_trippers.go:473]     Content-Type: application/json
	I0915 06:54:02.736334    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:02.736334    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:02.754719    7988 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0915 06:54:02.754719    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:02.754719    7988 round_trippers.go:580]     Audit-Id: 28aa0dc0-c8d5-42b8-8c2f-8aa3737ff11b
	I0915 06:54:02.754719    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:02.754719    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:02.755313    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:02.755313    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:02.755313    7988 round_trippers.go:580]     Content-Length: 1220
	I0915 06:54:02.755432    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:02 GMT
	I0915 06:54:02.755605    7988 request.go:1351] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"41b9725b-d36d-49fd-ae7d-dbfc589f198d","resourceVersion":"377","creationTimestamp":"2024-09-15T06:53:02Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-15T06:53:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
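Editorial note: because the standard StorageClass already exists (addonmanager.kubernetes.io/mode: EnsureExists), enabling default-storageclass only re-asserts the is-default-class annotation: the client GETs the object and PUTs it back with the annotation set, which is why the PUT above reports "unchanged". A client-go sketch of that update (illustrative):

    // imports: "context",
    //          metav1 "k8s.io/apimachinery/pkg/apis/meta/v1",
    //          "k8s.io/client-go/kubernetes"
    // markDefaultStorageClass sets the default-class annotation on an existing StorageClass.
    func markDefaultStorageClass(ctx context.Context, cs kubernetes.Interface, name string) error {
        sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        if sc.Annotations == nil {
            sc.Annotations = map[string]string{}
        }
        sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
        _, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
        return err
    }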
	I0915 06:54:03.040610    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/namespaces/kube-system/pods/etcd-functional-804700
	I0915 06:54:03.040610    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:03.040733    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:03.040733    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:03.049491    7988 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0915 06:54:03.049491    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:03.049491    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:03.049491    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:03.049491    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:03.049491    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:03.049491    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:03 GMT
	I0915 06:54:03.049491    7988 round_trippers.go:580]     Audit-Id: 33b45e67-216d-49bc-a83a-2cf1a4a1ef7a
	I0915 06:54:03.050436    7988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804700","namespace":"kube-system","uid":"b893c2e4-5138-4a9d-b5e0-6c0a3472a010","resourceVersion":"468","creationTimestamp":"2024-09-15T06:52:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.mirror":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.seen":"2024-09-15T06:52:46.983087970Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-15T06:52:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6683 chars]
	I0915 06:54:03.051335    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:54:03.051399    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:03.051399    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:03.051399    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:03.058461    7988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0915 06:54:03.058461    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:03.059015    7988 round_trippers.go:580]     Audit-Id: 6cdf8642-8fa1-42ab-9630-63161095a88a
	I0915 06:54:03.059015    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:03.059015    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:03.059087    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:03.059087    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:03.059087    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:03 GMT
	I0915 06:54:03.059444    7988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","resourceVersion":"428","creationTimestamp":"2024-09-15T06:52:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7a3ca67a20528f5dabbb456e8e4ce542b58ef23a","minikube.k8s.io/name":"functional-804700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_15T06_52_57_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-15T06:52:52Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0915 06:54:03.539679    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/namespaces/kube-system/pods/etcd-functional-804700
	I0915 06:54:03.539679    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:03.539679    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:03.539679    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:03.547696    7988 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0915 06:54:03.547838    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:03.547866    7988 round_trippers.go:580]     Audit-Id: 8fc5bea8-5ef6-445e-b604-375369eb1642
	I0915 06:54:03.547866    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:03.547866    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:03.547866    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:03.547866    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:03.547866    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:03 GMT
	I0915 06:54:03.548417    7988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804700","namespace":"kube-system","uid":"b893c2e4-5138-4a9d-b5e0-6c0a3472a010","resourceVersion":"468","creationTimestamp":"2024-09-15T06:52:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.mirror":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.seen":"2024-09-15T06:52:46.983087970Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-15T06:52:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6683 chars]
	I0915 06:54:03.549490    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:54:03.549580    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:03.549580    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:03.549580    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:03.554451    7988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 06:54:03.554451    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:03.554451    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:03.554451    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:03.554451    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:03.554451    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:03 GMT
	I0915 06:54:03.554451    7988 round_trippers.go:580]     Audit-Id: fda05e78-b187-4432-bf22-dd17808ad8ff
	I0915 06:54:03.554451    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:03.555456    7988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","resourceVersion":"428","creationTimestamp":"2024-09-15T06:52:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7a3ca67a20528f5dabbb456e8e4ce542b58ef23a","minikube.k8s.io/name":"functional-804700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_15T06_52_57_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-15T06:52:52Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0915 06:54:04.040734    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/namespaces/kube-system/pods/etcd-functional-804700
	I0915 06:54:04.040734    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:04.040734    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:04.040734    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:04.046680    7988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 06:54:04.046777    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:04.046777    7988 round_trippers.go:580]     Audit-Id: aa527364-fb6d-40ab-bb6e-2f6d2a70c348
	I0915 06:54:04.046899    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:04.046899    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:04.046899    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:04.046899    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:04.046899    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:04 GMT
	I0915 06:54:04.047316    7988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804700","namespace":"kube-system","uid":"b893c2e4-5138-4a9d-b5e0-6c0a3472a010","resourceVersion":"468","creationTimestamp":"2024-09-15T06:52:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.mirror":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.seen":"2024-09-15T06:52:46.983087970Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-15T06:52:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6683 chars]
	I0915 06:54:04.048357    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:54:04.048357    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:04.048457    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:04.048457    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:04.058348    7988 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0915 06:54:04.058348    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:04.058348    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:04.058348    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:04 GMT
	I0915 06:54:04.058348    7988 round_trippers.go:580]     Audit-Id: ab82210c-86fb-4011-8d63-cca9411587f0
	I0915 06:54:04.058348    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:04.058348    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:04.058348    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:04.058348    7988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","resourceVersion":"428","creationTimestamp":"2024-09-15T06:52:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7a3ca67a20528f5dabbb456e8e4ce542b58ef23a","minikube.k8s.io/name":"functional-804700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_15T06_52_57_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-15T06:52:52Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0915 06:54:04.066566    7988 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0915 06:54:04.066566    7988 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0915 06:54:04.066566    7988 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0915 06:54:04.066566    7988 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0915 06:54:04.066566    7988 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0915 06:54:04.066566    7988 command_runner.go:130] > pod/storage-provisioner configured
	I0915 06:54:04.066566    7988 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.2810969s)
	I0915 06:54:04.071079    7988 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0915 06:54:04.074135    7988 addons.go:510] duration metric: took 10.4805018s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0915 06:54:04.539524    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/namespaces/kube-system/pods/etcd-functional-804700
	I0915 06:54:04.539524    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:04.540122    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:04.540122    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:04.545745    7988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 06:54:04.545745    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:04.546298    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:04 GMT
	I0915 06:54:04.546298    7988 round_trippers.go:580]     Audit-Id: b76fcc39-f758-4a5a-a5a4-d0bd5a9d5811
	I0915 06:54:04.546298    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:04.546298    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:04.546298    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:04.546298    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:04.546487    7988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804700","namespace":"kube-system","uid":"b893c2e4-5138-4a9d-b5e0-6c0a3472a010","resourceVersion":"468","creationTimestamp":"2024-09-15T06:52:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.mirror":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.seen":"2024-09-15T06:52:46.983087970Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-15T06:52:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6683 chars]
	I0915 06:54:04.547443    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:54:04.547472    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:04.547472    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:04.547472    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:04.554347    7988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 06:54:04.554347    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:04.554347    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:04 GMT
	I0915 06:54:04.554347    7988 round_trippers.go:580]     Audit-Id: d5d2b7fd-018a-46c0-95e7-11f0d8d07fe8
	I0915 06:54:04.554347    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:04.554347    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:04.554347    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:04.554347    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:04.554347    7988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","resourceVersion":"428","creationTimestamp":"2024-09-15T06:52:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7a3ca67a20528f5dabbb456e8e4ce542b58ef23a","minikube.k8s.io/name":"functional-804700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_15T06_52_57_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-15T06:52:52Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0915 06:54:04.555776    7988 pod_ready.go:103] pod "etcd-functional-804700" in "kube-system" namespace has status "Ready":"False"
	I0915 06:54:05.039348    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/namespaces/kube-system/pods/etcd-functional-804700
	I0915 06:54:05.039348    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:05.039348    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:05.039348    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:05.048978    7988 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0915 06:54:05.048978    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:05.049082    7988 round_trippers.go:580]     Audit-Id: bc0f976b-1bf2-44bc-959a-eeaf5d966cfe
	I0915 06:54:05.049082    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:05.049082    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:05.049082    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:05.049082    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:05.049173    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:05 GMT
	I0915 06:54:05.049234    7988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804700","namespace":"kube-system","uid":"b893c2e4-5138-4a9d-b5e0-6c0a3472a010","resourceVersion":"468","creationTimestamp":"2024-09-15T06:52:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.mirror":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.seen":"2024-09-15T06:52:46.983087970Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-15T06:52:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6683 chars]
	I0915 06:54:05.050387    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:54:05.050481    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:05.050481    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:05.050481    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:05.057652    7988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0915 06:54:05.057652    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:05.057652    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:05.057652    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:05.057652    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:05.057652    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:05 GMT
	I0915 06:54:05.057652    7988 round_trippers.go:580]     Audit-Id: ad983f02-a645-41af-8cc0-89d68c2bd10e
	I0915 06:54:05.057652    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:05.057652    7988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","resourceVersion":"428","creationTimestamp":"2024-09-15T06:52:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7a3ca67a20528f5dabbb456e8e4ce542b58ef23a","minikube.k8s.io/name":"functional-804700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_15T06_52_57_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-15T06:52:52Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0915 06:54:05.539255    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/namespaces/kube-system/pods/etcd-functional-804700
	I0915 06:54:05.539255    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:05.539255    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:05.539255    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:05.544777    7988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 06:54:05.544856    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:05.544856    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:05 GMT
	I0915 06:54:05.544904    7988 round_trippers.go:580]     Audit-Id: a4719372-8daa-4bcd-9b6b-8b224b3de4da
	I0915 06:54:05.544904    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:05.544904    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:05.544904    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:05.544904    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:05.545135    7988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804700","namespace":"kube-system","uid":"b893c2e4-5138-4a9d-b5e0-6c0a3472a010","resourceVersion":"468","creationTimestamp":"2024-09-15T06:52:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.mirror":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.seen":"2024-09-15T06:52:46.983087970Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-15T06:52:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6683 chars]
	I0915 06:54:05.546005    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:54:05.546083    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:05.546083    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:05.546226    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:05.553946    7988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0915 06:54:05.553946    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:05.553946    7988 round_trippers.go:580]     Audit-Id: cfbfc347-5a03-42ef-bc9c-cde29db97641
	I0915 06:54:05.553946    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:05.553946    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:05.553946    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:05.553946    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:05.553946    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:05 GMT
	I0915 06:54:05.553946    7988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","resourceVersion":"428","creationTimestamp":"2024-09-15T06:52:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7a3ca67a20528f5dabbb456e8e4ce542b58ef23a","minikube.k8s.io/name":"functional-804700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_15T06_52_57_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-15T06:52:52Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0915 06:54:06.040145    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/namespaces/kube-system/pods/etcd-functional-804700
	I0915 06:54:06.040145    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:06.040145    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:06.040145    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:06.046055    7988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 06:54:06.046132    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:06.046132    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:06.046132    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:06.046132    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:06 GMT
	I0915 06:54:06.046132    7988 round_trippers.go:580]     Audit-Id: ca541130-7159-4d5f-9da8-b5260bf8aaea
	I0915 06:54:06.046132    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:06.046132    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:06.046420    7988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804700","namespace":"kube-system","uid":"b893c2e4-5138-4a9d-b5e0-6c0a3472a010","resourceVersion":"468","creationTimestamp":"2024-09-15T06:52:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.mirror":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.seen":"2024-09-15T06:52:46.983087970Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-15T06:52:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6683 chars]
	I0915 06:54:06.046644    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:54:06.047206    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:06.047206    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:06.047365    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:06.064737    7988 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0915 06:54:06.064803    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:06.064803    7988 round_trippers.go:580]     Audit-Id: 74085a0c-1f8b-40cb-8044-2daad48f1a80
	I0915 06:54:06.064803    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:06.064803    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:06.064865    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:06.064865    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:06.064865    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:06 GMT
	I0915 06:54:06.065093    7988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","resourceVersion":"428","creationTimestamp":"2024-09-15T06:52:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7a3ca67a20528f5dabbb456e8e4ce542b58ef23a","minikube.k8s.io/name":"functional-804700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_15T06_52_57_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-15T06:52:52Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0915 06:54:06.540293    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/namespaces/kube-system/pods/etcd-functional-804700
	I0915 06:54:06.540293    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:06.540395    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:06.540395    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:06.548474    7988 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0915 06:54:06.548474    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:06.548474    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:06 GMT
	I0915 06:54:06.548474    7988 round_trippers.go:580]     Audit-Id: a9e2b559-a7d9-43ac-9cc6-c8e9e96e38da
	I0915 06:54:06.548474    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:06.548474    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:06.548474    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:06.548474    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:06.549272    7988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804700","namespace":"kube-system","uid":"b893c2e4-5138-4a9d-b5e0-6c0a3472a010","resourceVersion":"468","creationTimestamp":"2024-09-15T06:52:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.mirror":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.seen":"2024-09-15T06:52:46.983087970Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-15T06:52:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6683 chars]
	I0915 06:54:06.550094    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:54:06.550094    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:06.550094    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:06.550094    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:06.557491    7988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0915 06:54:06.557491    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:06.557491    7988 round_trippers.go:580]     Audit-Id: 685b345d-5d84-443d-8abc-bb76a7eec9d5
	I0915 06:54:06.557491    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:06.557491    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:06.557491    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:06.557491    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:06.557491    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:06 GMT
	I0915 06:54:06.558183    7988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","resourceVersion":"428","creationTimestamp":"2024-09-15T06:52:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7a3ca67a20528f5dabbb456e8e4ce542b58ef23a","minikube.k8s.io/name":"functional-804700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_15T06_52_57_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-15T06:52:52Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0915 06:54:06.558183    7988 pod_ready.go:103] pod "etcd-functional-804700" in "kube-system" namespace has status "Ready":"False"
	I0915 06:54:07.041270    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/namespaces/kube-system/pods/etcd-functional-804700
	I0915 06:54:07.041396    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:07.041396    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:07.041396    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:07.047760    7988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 06:54:07.047760    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:07.047760    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:07 GMT
	I0915 06:54:07.047760    7988 round_trippers.go:580]     Audit-Id: fcf04695-300a-4519-8c0d-ecea8e5c05dd
	I0915 06:54:07.047760    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:07.047760    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:07.047760    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:07.047760    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:07.048293    7988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804700","namespace":"kube-system","uid":"b893c2e4-5138-4a9d-b5e0-6c0a3472a010","resourceVersion":"468","creationTimestamp":"2024-09-15T06:52:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.mirror":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.seen":"2024-09-15T06:52:46.983087970Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-15T06:52:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6683 chars]
	I0915 06:54:07.049168    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:54:07.049168    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:07.049168    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:07.049168    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:07.056334    7988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0915 06:54:07.056334    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:07.056334    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:07.056334    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:07.056334    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:07 GMT
	I0915 06:54:07.056334    7988 round_trippers.go:580]     Audit-Id: c59ee4d1-6588-4802-93ed-a4e291794ea3
	I0915 06:54:07.056334    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:07.056334    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:07.057064    7988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","resourceVersion":"428","creationTimestamp":"2024-09-15T06:52:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7a3ca67a20528f5dabbb456e8e4ce542b58ef23a","minikube.k8s.io/name":"functional-804700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_15T06_52_57_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-15T06:52:52Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0915 06:54:07.539801    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/namespaces/kube-system/pods/etcd-functional-804700
	I0915 06:54:07.539801    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:07.539801    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:07.539801    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:07.545943    7988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 06:54:07.545943    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:07.545943    7988 round_trippers.go:580]     Audit-Id: 315e29f3-bc2d-41bd-a2d8-0cc1ad6585f2
	I0915 06:54:07.545943    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:07.545943    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:07.545943    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:07.545943    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:07.545943    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:07 GMT
	I0915 06:54:07.546700    7988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804700","namespace":"kube-system","uid":"b893c2e4-5138-4a9d-b5e0-6c0a3472a010","resourceVersion":"468","creationTimestamp":"2024-09-15T06:52:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.mirror":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.seen":"2024-09-15T06:52:46.983087970Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-15T06:52:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6683 chars]
	I0915 06:54:07.547280    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:54:07.547280    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:07.547280    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:07.547280    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:07.554030    7988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 06:54:07.554030    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:07.554030    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:07.554030    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:07.554030    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:07.554030    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:07 GMT
	I0915 06:54:07.554030    7988 round_trippers.go:580]     Audit-Id: 25204678-4196-4d11-b9c5-ad8b5105e26d
	I0915 06:54:07.554030    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:07.555080    7988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","resourceVersion":"428","creationTimestamp":"2024-09-15T06:52:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7a3ca67a20528f5dabbb456e8e4ce542b58ef23a","minikube.k8s.io/name":"functional-804700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_15T06_52_57_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-15T06:52:52Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0915 06:54:08.039385    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/namespaces/kube-system/pods/etcd-functional-804700
	I0915 06:54:08.039385    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:08.039385    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:08.039385    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:08.045892    7988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 06:54:08.045892    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:08.045892    7988 round_trippers.go:580]     Audit-Id: 58f89a40-f6e0-4050-96af-0b1c8c6cb089
	I0915 06:54:08.045892    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:08.045892    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:08.045892    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:08.045892    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:08.045892    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:08 GMT
	I0915 06:54:08.046340    7988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804700","namespace":"kube-system","uid":"b893c2e4-5138-4a9d-b5e0-6c0a3472a010","resourceVersion":"468","creationTimestamp":"2024-09-15T06:52:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.mirror":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.seen":"2024-09-15T06:52:46.983087970Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-15T06:52:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6683 chars]
	I0915 06:54:08.047010    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:54:08.047082    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:08.047131    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:08.047131    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:08.054800    7988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0915 06:54:08.054886    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:08.054886    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:08.054886    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:08.054886    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:08 GMT
	I0915 06:54:08.054886    7988 round_trippers.go:580]     Audit-Id: 4fd13517-3360-4c7a-99e2-deb11392b465
	I0915 06:54:08.054933    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:08.054933    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:08.056088    7988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","resourceVersion":"428","creationTimestamp":"2024-09-15T06:52:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7a3ca67a20528f5dabbb456e8e4ce542b58ef23a","minikube.k8s.io/name":"functional-804700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_15T06_52_57_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-15T06:52:52Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0915 06:54:08.539893    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/namespaces/kube-system/pods/etcd-functional-804700
	I0915 06:54:08.539893    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:08.539893    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:08.539893    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:08.545768    7988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 06:54:08.545831    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:08.545831    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:08.545894    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:08.545894    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:08.545894    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:08.545894    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:08 GMT
	I0915 06:54:08.545894    7988 round_trippers.go:580]     Audit-Id: 6e6a70b2-81d9-40b7-a756-4c4a0343ca82
	I0915 06:54:08.545894    7988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804700","namespace":"kube-system","uid":"b893c2e4-5138-4a9d-b5e0-6c0a3472a010","resourceVersion":"468","creationTimestamp":"2024-09-15T06:52:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.mirror":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.seen":"2024-09-15T06:52:46.983087970Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-15T06:52:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6683 chars]
	I0915 06:54:08.546849    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:54:08.546849    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:08.546849    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:08.546849    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:08.552923    7988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 06:54:08.552923    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:08.552923    7988 round_trippers.go:580]     Audit-Id: 2e528777-24b0-4abe-8793-ccf722040ebc
	I0915 06:54:08.552923    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:08.552923    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:08.552923    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:08.552923    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:08.552923    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:08 GMT
	I0915 06:54:08.555510    7988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","resourceVersion":"428","creationTimestamp":"2024-09-15T06:52:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7a3ca67a20528f5dabbb456e8e4ce542b58ef23a","minikube.k8s.io/name":"functional-804700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_15T06_52_57_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-15T06:52:52Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0915 06:54:09.039413    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/namespaces/kube-system/pods/etcd-functional-804700
	I0915 06:54:09.039413    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:09.039413    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:09.039413    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:09.047032    7988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0915 06:54:09.047032    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:09.047032    7988 round_trippers.go:580]     Audit-Id: c3c85e4d-7ee6-4148-8b52-c1707abb7777
	I0915 06:54:09.047032    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:09.047032    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:09.047032    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:09.047032    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:09.047032    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:09 GMT
	I0915 06:54:09.047570    7988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804700","namespace":"kube-system","uid":"b893c2e4-5138-4a9d-b5e0-6c0a3472a010","resourceVersion":"468","creationTimestamp":"2024-09-15T06:52:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.mirror":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.seen":"2024-09-15T06:52:46.983087970Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-15T06:52:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6683 chars]
	I0915 06:54:09.047960    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:54:09.047960    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:09.047960    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:09.047960    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:09.055060    7988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0915 06:54:09.055131    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:09.055131    7988 round_trippers.go:580]     Audit-Id: dba14e68-9169-405a-baf8-27e64e6954b2
	I0915 06:54:09.055131    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:09.055131    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:09.055172    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:09.055172    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:09.055172    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:09 GMT
	I0915 06:54:09.055172    7988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","resourceVersion":"428","creationTimestamp":"2024-09-15T06:52:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7a3ca67a20528f5dabbb456e8e4ce542b58ef23a","minikube.k8s.io/name":"functional-804700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_15T06_52_57_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-15T06:52:52Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0915 06:54:09.055828    7988 pod_ready.go:103] pod "etcd-functional-804700" in "kube-system" namespace has status "Ready":"False"
	I0915 06:54:09.539391    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/namespaces/kube-system/pods/etcd-functional-804700
	I0915 06:54:09.539391    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:09.539391    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:09.539391    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:09.547553    7988 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0915 06:54:09.547553    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:09.547553    7988 round_trippers.go:580]     Audit-Id: dbd79b1d-a58d-4722-be34-fa909c15b429
	I0915 06:54:09.547553    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:09.547553    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:09.547553    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:09.547553    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:09.547553    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:09 GMT
	I0915 06:54:09.549165    7988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804700","namespace":"kube-system","uid":"b893c2e4-5138-4a9d-b5e0-6c0a3472a010","resourceVersion":"468","creationTimestamp":"2024-09-15T06:52:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.mirror":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.seen":"2024-09-15T06:52:46.983087970Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-15T06:52:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6683 chars]
	I0915 06:54:09.550973    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:54:09.550973    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:09.551127    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:09.551127    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:09.560856    7988 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0915 06:54:09.560856    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:09.560856    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:09.560856    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:09.560856    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:09 GMT
	I0915 06:54:09.560856    7988 round_trippers.go:580]     Audit-Id: 9b1bc726-ce46-40e5-a147-911092737798
	I0915 06:54:09.560856    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:09.560856    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:09.560856    7988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","resourceVersion":"428","creationTimestamp":"2024-09-15T06:52:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7a3ca67a20528f5dabbb456e8e4ce542b58ef23a","minikube.k8s.io/name":"functional-804700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_15T06_52_57_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-15T06:52:52Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0915 06:54:10.039856    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/namespaces/kube-system/pods/etcd-functional-804700
	I0915 06:54:10.039856    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:10.039856    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:10.039856    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:10.046415    7988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 06:54:10.046415    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:10.046415    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:10.046415    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:10.046415    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:10 GMT
	I0915 06:54:10.046415    7988 round_trippers.go:580]     Audit-Id: bd9a357a-3d4a-444a-ac65-1124bbe249a7
	I0915 06:54:10.046415    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:10.046542    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:10.046918    7988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804700","namespace":"kube-system","uid":"b893c2e4-5138-4a9d-b5e0-6c0a3472a010","resourceVersion":"468","creationTimestamp":"2024-09-15T06:52:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.mirror":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.seen":"2024-09-15T06:52:46.983087970Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-15T06:52:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6683 chars]
	I0915 06:54:10.047211    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:54:10.047211    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:10.047211    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:10.047211    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:10.053925    7988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 06:54:10.053925    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:10.053925    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:10.053925    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:10 GMT
	I0915 06:54:10.053925    7988 round_trippers.go:580]     Audit-Id: 8fc83c94-8717-4611-923a-892b9525c6ee
	I0915 06:54:10.053925    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:10.053925    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:10.053925    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:10.053925    7988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","resourceVersion":"428","creationTimestamp":"2024-09-15T06:52:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7a3ca67a20528f5dabbb456e8e4ce542b58ef23a","minikube.k8s.io/name":"functional-804700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_15T06_52_57_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-15T06:52:52Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0915 06:54:10.539896    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/namespaces/kube-system/pods/etcd-functional-804700
	I0915 06:54:10.539896    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:10.539896    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:10.539896    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:10.545939    7988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 06:54:10.545939    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:10.545939    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:10.545939    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:10.545939    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:10 GMT
	I0915 06:54:10.545939    7988 round_trippers.go:580]     Audit-Id: 59e9ef37-0706-4c66-8348-3e200cb3dffc
	I0915 06:54:10.545939    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:10.545939    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:10.546732    7988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804700","namespace":"kube-system","uid":"b893c2e4-5138-4a9d-b5e0-6c0a3472a010","resourceVersion":"468","creationTimestamp":"2024-09-15T06:52:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.mirror":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.seen":"2024-09-15T06:52:46.983087970Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-15T06:52:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6683 chars]
	I0915 06:54:10.547760    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:54:10.547837    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:10.547837    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:10.547837    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:10.554253    7988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 06:54:10.554253    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:10.554253    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:10 GMT
	I0915 06:54:10.554253    7988 round_trippers.go:580]     Audit-Id: d44d4bea-d3ee-413b-9b29-c8d6fdacaacf
	I0915 06:54:10.554253    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:10.554253    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:10.554253    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:10.554253    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:10.554937    7988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","resourceVersion":"428","creationTimestamp":"2024-09-15T06:52:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7a3ca67a20528f5dabbb456e8e4ce542b58ef23a","minikube.k8s.io/name":"functional-804700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_15T06_52_57_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-15T06:52:52Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0915 06:54:11.039639    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/namespaces/kube-system/pods/etcd-functional-804700
	I0915 06:54:11.039639    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:11.039639    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:11.039639    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:11.046034    7988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 06:54:11.046122    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:11.046122    7988 round_trippers.go:580]     Audit-Id: fcf196b9-ae79-41f9-819a-6ccb602aa202
	I0915 06:54:11.046204    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:11.046204    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:11.046204    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:11.046204    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:11.046306    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:11 GMT
	I0915 06:54:11.046682    7988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804700","namespace":"kube-system","uid":"b893c2e4-5138-4a9d-b5e0-6c0a3472a010","resourceVersion":"468","creationTimestamp":"2024-09-15T06:52:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.mirror":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.seen":"2024-09-15T06:52:46.983087970Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-15T06:52:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6683 chars]
	I0915 06:54:11.047129    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:54:11.047129    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:11.047129    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:11.047129    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:11.056664    7988 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0915 06:54:11.056664    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:11.056664    7988 round_trippers.go:580]     Audit-Id: ff0895ae-43aa-47e2-b8fe-2b3afc256411
	I0915 06:54:11.056664    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:11.056664    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:11.056664    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:11.056664    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:11.056664    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:11 GMT
	I0915 06:54:11.056664    7988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","resourceVersion":"428","creationTimestamp":"2024-09-15T06:52:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7a3ca67a20528f5dabbb456e8e4ce542b58ef23a","minikube.k8s.io/name":"functional-804700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_15T06_52_57_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-15T06:52:52Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0915 06:54:11.057569    7988 pod_ready.go:103] pod "etcd-functional-804700" in "kube-system" namespace has status "Ready":"False"
	I0915 06:54:11.540152    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/namespaces/kube-system/pods/etcd-functional-804700
	I0915 06:54:11.540233    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:11.540233    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:11.540233    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:11.546539    7988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 06:54:11.546602    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:11.546602    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:11.546602    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:11 GMT
	I0915 06:54:11.546602    7988 round_trippers.go:580]     Audit-Id: e4837102-58b9-49f5-ae09-223d9862d279
	I0915 06:54:11.546649    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:11.546649    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:11.546649    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:11.546830    7988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804700","namespace":"kube-system","uid":"b893c2e4-5138-4a9d-b5e0-6c0a3472a010","resourceVersion":"468","creationTimestamp":"2024-09-15T06:52:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.mirror":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.seen":"2024-09-15T06:52:46.983087970Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-15T06:52:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6683 chars]
	I0915 06:54:11.547549    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:54:11.547697    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:11.547697    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:11.547697    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:11.553357    7988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 06:54:11.553431    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:11.553431    7988 round_trippers.go:580]     Audit-Id: 7489e110-ceb8-4e65-bb42-af994c475376
	I0915 06:54:11.553431    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:11.553431    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:11.553431    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:11.553431    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:11.553431    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:11 GMT
	I0915 06:54:11.553658    7988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","resourceVersion":"428","creationTimestamp":"2024-09-15T06:52:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7a3ca67a20528f5dabbb456e8e4ce542b58ef23a","minikube.k8s.io/name":"functional-804700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_15T06_52_57_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-15T06:52:52Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0915 06:54:12.039408    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/namespaces/kube-system/pods/etcd-functional-804700
	I0915 06:54:12.039408    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:12.039408    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:12.039408    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:12.045870    7988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 06:54:12.045900    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:12.045900    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:12.045900    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:12.045900    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:12.045900    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:12 GMT
	I0915 06:54:12.045900    7988 round_trippers.go:580]     Audit-Id: ad8bb017-e8bd-4610-8f60-4051a45387e0
	I0915 06:54:12.045900    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:12.045900    7988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804700","namespace":"kube-system","uid":"b893c2e4-5138-4a9d-b5e0-6c0a3472a010","resourceVersion":"468","creationTimestamp":"2024-09-15T06:52:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.mirror":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.seen":"2024-09-15T06:52:46.983087970Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-15T06:52:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6683 chars]
	I0915 06:54:12.046762    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:54:12.046762    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:12.046762    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:12.046762    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:12.053575    7988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 06:54:12.053621    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:12.053621    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:12 GMT
	I0915 06:54:12.053621    7988 round_trippers.go:580]     Audit-Id: a20929ca-dbfa-4535-b054-134a4a1df88c
	I0915 06:54:12.053621    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:12.053621    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:12.053621    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:12.053621    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:12.053621    7988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","resourceVersion":"428","creationTimestamp":"2024-09-15T06:52:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7a3ca67a20528f5dabbb456e8e4ce542b58ef23a","minikube.k8s.io/name":"functional-804700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_15T06_52_57_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-15T06:52:52Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0915 06:54:12.539403    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/namespaces/kube-system/pods/etcd-functional-804700
	I0915 06:54:12.539403    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:12.539403    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:12.539403    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:12.545072    7988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 06:54:12.545072    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:12.545203    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:12 GMT
	I0915 06:54:12.545203    7988 round_trippers.go:580]     Audit-Id: f6b15448-2e75-4398-8858-c1ac3968ba64
	I0915 06:54:12.545203    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:12.545203    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:12.545203    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:12.545203    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:12.545710    7988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-804700","namespace":"kube-system","uid":"b893c2e4-5138-4a9d-b5e0-6c0a3472a010","resourceVersion":"574","creationTimestamp":"2024-09-15T06:52:53Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.mirror":"9e1ab0e3861a4926544ba1c53916645b","kubernetes.io/config.seen":"2024-09-15T06:52:46.983087970Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-15T06:52:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6459 chars]
	I0915 06:54:12.546359    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:54:12.546359    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:12.546423    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:12.546423    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:12.552443    7988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 06:54:12.552443    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:12.552443    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:12.552443    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:12.552443    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:12.552443    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:12.552443    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:12 GMT
	I0915 06:54:12.552443    7988 round_trippers.go:580]     Audit-Id: 2a06f25c-3513-4449-a93f-815c17ffdcc6
	I0915 06:54:12.552673    7988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","resourceVersion":"428","creationTimestamp":"2024-09-15T06:52:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7a3ca67a20528f5dabbb456e8e4ce542b58ef23a","minikube.k8s.io/name":"functional-804700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_15T06_52_57_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-15T06:52:52Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0915 06:54:12.552673    7988 pod_ready.go:93] pod "etcd-functional-804700" in "kube-system" namespace has status "Ready":"True"
	I0915 06:54:12.552673    7988 pod_ready.go:82] duration metric: took 10.0134033s for pod "etcd-functional-804700" in "kube-system" namespace to be "Ready" ...
	I0915 06:54:12.552673    7988 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-functional-804700" in "kube-system" namespace to be "Ready" ...
	I0915 06:54:12.552673    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-804700
	I0915 06:54:12.553255    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:12.553255    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:12.553255    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:12.558521    7988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 06:54:12.558576    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:12.558576    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:12.558576    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:12.558576    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:12.558576    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:12 GMT
	I0915 06:54:12.558576    7988 round_trippers.go:580]     Audit-Id: f3d09c7f-e6c7-42cd-a422-5f9e323e4001
	I0915 06:54:12.558686    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:12.558831    7988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-804700","namespace":"kube-system","uid":"6449e620-061f-4655-afcf-14ebbaf6aa44","resourceVersion":"563","creationTimestamp":"2024-09-15T06:52:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"3152d1fa7c8023abfe4b7a043966ca79","kubernetes.io/config.mirror":"3152d1fa7c8023abfe4b7a043966ca79","kubernetes.io/config.seen":"2024-09-15T06:52:56.549631710Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-15T06:52:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8535 chars]
	I0915 06:54:12.560010    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:54:12.560010    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:12.560087    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:12.560087    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:12.565812    7988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 06:54:12.565812    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:12.565812    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:12.565812    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:12.565812    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:12.565812    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:12 GMT
	I0915 06:54:12.565812    7988 round_trippers.go:580]     Audit-Id: 5c8cb9ed-d0ca-4881-bb7b-aa9352c8cd88
	I0915 06:54:12.565812    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:12.565812    7988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","resourceVersion":"428","creationTimestamp":"2024-09-15T06:52:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7a3ca67a20528f5dabbb456e8e4ce542b58ef23a","minikube.k8s.io/name":"functional-804700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_15T06_52_57_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-15T06:52:52Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0915 06:54:12.565812    7988 pod_ready.go:93] pod "kube-apiserver-functional-804700" in "kube-system" namespace has status "Ready":"True"
	I0915 06:54:12.565812    7988 pod_ready.go:82] duration metric: took 13.1384ms for pod "kube-apiserver-functional-804700" in "kube-system" namespace to be "Ready" ...
	I0915 06:54:12.566459    7988 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-functional-804700" in "kube-system" namespace to be "Ready" ...
	I0915 06:54:12.566599    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-804700
	I0915 06:54:12.566629    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:12.566629    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:12.566629    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:12.572086    7988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 06:54:12.572086    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:12.572086    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:12.572086    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:12.572086    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:12.572086    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:12.572086    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:12 GMT
	I0915 06:54:12.572086    7988 round_trippers.go:580]     Audit-Id: 77b9e1d3-e4ed-4c25-839d-51b8e5c6da1b
	I0915 06:54:12.572086    7988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-804700","namespace":"kube-system","uid":"98f3184e-a432-482b-9b9d-4541819f8cd5","resourceVersion":"565","creationTimestamp":"2024-09-15T06:52:56Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"37583227df99c44291498d4af7cde4d2","kubernetes.io/config.mirror":"37583227df99c44291498d4af7cde4d2","kubernetes.io/config.seen":"2024-09-15T06:52:56.549633610Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-15T06:52:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8110 chars]
	I0915 06:54:12.573129    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:54:12.573213    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:12.573213    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:12.573244    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:12.579036    7988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 06:54:12.579036    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:12.579036    7988 round_trippers.go:580]     Audit-Id: a8c636e4-df7b-4c42-bae5-f62dc302be2e
	I0915 06:54:12.579036    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:12.579036    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:12.579036    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:12.579036    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:12.579036    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:12 GMT
	I0915 06:54:12.579036    7988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","resourceVersion":"428","creationTimestamp":"2024-09-15T06:52:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7a3ca67a20528f5dabbb456e8e4ce542b58ef23a","minikube.k8s.io/name":"functional-804700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_15T06_52_57_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-15T06:52:52Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0915 06:54:12.579927    7988 pod_ready.go:93] pod "kube-controller-manager-functional-804700" in "kube-system" namespace has status "Ready":"True"
	I0915 06:54:12.579927    7988 pod_ready.go:82] duration metric: took 13.4681ms for pod "kube-controller-manager-functional-804700" in "kube-system" namespace to be "Ready" ...
	I0915 06:54:12.579927    7988 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-m29mz" in "kube-system" namespace to be "Ready" ...
	I0915 06:54:12.580721    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/namespaces/kube-system/pods/kube-proxy-m29mz
	I0915 06:54:12.580721    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:12.580721    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:12.580721    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:12.585343    7988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 06:54:12.585343    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:12.585343    7988 round_trippers.go:580]     Audit-Id: ed6f7348-c9f7-48a2-88c8-8e1c8fc27e5f
	I0915 06:54:12.585343    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:12.585343    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:12.585343    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:12.585343    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:12.585343    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:12 GMT
	I0915 06:54:12.585343    7988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-m29mz","generateName":"kube-proxy-","namespace":"kube-system","uid":"e9476bf2-5f0f-4431-ac1b-83e3e926b334","resourceVersion":"476","creationTimestamp":"2024-09-15T06:53:01Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4e95e6d6-32be-41a5-90c5-d5e4a851aeae","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-15T06:53:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e95e6d6-32be-41a5-90c5-d5e4a851aeae\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6191 chars]
	I0915 06:54:12.586502    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:54:12.586551    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:12.586599    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:12.586599    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:12.593437    7988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 06:54:12.593437    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:12.593437    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:12.593437    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:12.593437    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:12 GMT
	I0915 06:54:12.593437    7988 round_trippers.go:580]     Audit-Id: b88c3912-5d16-4d92-8630-1a3139ef01dd
	I0915 06:54:12.593437    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:12.593437    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:12.593437    7988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","resourceVersion":"428","creationTimestamp":"2024-09-15T06:52:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7a3ca67a20528f5dabbb456e8e4ce542b58ef23a","minikube.k8s.io/name":"functional-804700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_15T06_52_57_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-15T06:52:52Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0915 06:54:12.593437    7988 pod_ready.go:93] pod "kube-proxy-m29mz" in "kube-system" namespace has status "Ready":"True"
	I0915 06:54:12.594557    7988 pod_ready.go:82] duration metric: took 14.6297ms for pod "kube-proxy-m29mz" in "kube-system" namespace to be "Ready" ...
	I0915 06:54:12.594557    7988 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-functional-804700" in "kube-system" namespace to be "Ready" ...
	I0915 06:54:12.594668    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-804700
	I0915 06:54:12.594713    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:12.594713    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:12.594713    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:12.598823    7988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0915 06:54:12.598823    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:12.598823    7988 round_trippers.go:580]     Audit-Id: eff6c4de-51a7-440d-a85a-a9f2f98da331
	I0915 06:54:12.598823    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:12.598823    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:12.598823    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:12.598823    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:12.598823    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:12 GMT
	I0915 06:54:12.598823    7988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-804700","namespace":"kube-system","uid":"4762eaf3-c04b-48bc-813c-7a85d8edf5a8","resourceVersion":"480","creationTimestamp":"2024-09-15T06:52:56Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8d79b44b78cc4b6b9d1e74a3adc1e7e1","kubernetes.io/config.mirror":"8d79b44b78cc4b6b9d1e74a3adc1e7e1","kubernetes.io/config.seen":"2024-09-15T06:52:56.549635011Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-15T06:52:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0915 06:54:12.599878    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:54:12.599878    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:12.599878    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:12.599878    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:12.605704    7988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 06:54:12.605704    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:12.605704    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:12.605704    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:12.605704    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:12.605704    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:12.605704    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:12 GMT
	I0915 06:54:12.605704    7988 round_trippers.go:580]     Audit-Id: b37fc49f-1ca1-4447-a9c6-f0d0d97565e3
	I0915 06:54:12.605704    7988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","resourceVersion":"428","creationTimestamp":"2024-09-15T06:52:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7a3ca67a20528f5dabbb456e8e4ce542b58ef23a","minikube.k8s.io/name":"functional-804700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_15T06_52_57_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-15T06:52:52Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0915 06:54:13.095416    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-804700
	I0915 06:54:13.095416    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:13.095416    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:13.095416    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:13.102184    7988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 06:54:13.102305    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:13.102305    7988 round_trippers.go:580]     Audit-Id: db40f34c-c7c9-4e3b-bc8e-67207fe1f6e4
	I0915 06:54:13.102305    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:13.102305    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:13.102305    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:13.102384    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:13.102384    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:13 GMT
	I0915 06:54:13.102528    7988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-804700","namespace":"kube-system","uid":"4762eaf3-c04b-48bc-813c-7a85d8edf5a8","resourceVersion":"480","creationTimestamp":"2024-09-15T06:52:56Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8d79b44b78cc4b6b9d1e74a3adc1e7e1","kubernetes.io/config.mirror":"8d79b44b78cc4b6b9d1e74a3adc1e7e1","kubernetes.io/config.seen":"2024-09-15T06:52:56.549635011Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-15T06:52:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0915 06:54:13.103566    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:54:13.103566    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:13.103566    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:13.103566    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:13.110805    7988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0915 06:54:13.111339    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:13.111339    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:13.111339    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:13.111339    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:13.111339    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:13 GMT
	I0915 06:54:13.111339    7988 round_trippers.go:580]     Audit-Id: eb92d333-5ae3-40ef-a60e-40e85d23ea68
	I0915 06:54:13.111339    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:13.111620    7988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","resourceVersion":"428","creationTimestamp":"2024-09-15T06:52:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7a3ca67a20528f5dabbb456e8e4ce542b58ef23a","minikube.k8s.io/name":"functional-804700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_15T06_52_57_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-15T06:52:52Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0915 06:54:13.595540    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-804700
	I0915 06:54:13.595540    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:13.595540    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:13.595540    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:13.601163    7988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 06:54:13.601163    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:13.601163    7988 round_trippers.go:580]     Audit-Id: c41688d5-6ef5-42fa-ae73-81228adf8f09
	I0915 06:54:13.601163    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:13.601163    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:13.601163    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:13.601163    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:13.601706    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:13 GMT
	I0915 06:54:13.601861    7988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-804700","namespace":"kube-system","uid":"4762eaf3-c04b-48bc-813c-7a85d8edf5a8","resourceVersion":"480","creationTimestamp":"2024-09-15T06:52:56Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8d79b44b78cc4b6b9d1e74a3adc1e7e1","kubernetes.io/config.mirror":"8d79b44b78cc4b6b9d1e74a3adc1e7e1","kubernetes.io/config.seen":"2024-09-15T06:52:56.549635011Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-15T06:52:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0915 06:54:13.602204    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:54:13.602204    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:13.602204    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:13.602204    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:13.608734    7988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 06:54:13.609262    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:13.609262    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:13.609262    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:13.609262    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:13.609262    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:13 GMT
	I0915 06:54:13.609262    7988 round_trippers.go:580]     Audit-Id: acf27811-43e8-4b85-a89c-2b24d4e716c5
	I0915 06:54:13.609262    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:13.609639    7988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","resourceVersion":"428","creationTimestamp":"2024-09-15T06:52:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7a3ca67a20528f5dabbb456e8e4ce542b58ef23a","minikube.k8s.io/name":"functional-804700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_15T06_52_57_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-15T06:52:52Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0915 06:54:14.095276    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-804700
	I0915 06:54:14.095276    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:14.095276    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:14.095276    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:14.101879    7988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 06:54:14.101879    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:14.101981    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:14.101981    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:14 GMT
	I0915 06:54:14.101981    7988 round_trippers.go:580]     Audit-Id: 38b489e2-4e99-42e0-9939-61f5aa05f4c8
	I0915 06:54:14.101981    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:14.101981    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:14.101981    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:14.102538    7988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-804700","namespace":"kube-system","uid":"4762eaf3-c04b-48bc-813c-7a85d8edf5a8","resourceVersion":"575","creationTimestamp":"2024-09-15T06:52:56Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8d79b44b78cc4b6b9d1e74a3adc1e7e1","kubernetes.io/config.mirror":"8d79b44b78cc4b6b9d1e74a3adc1e7e1","kubernetes.io/config.seen":"2024-09-15T06:52:56.549635011Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-15T06:52:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5197 chars]
	I0915 06:54:14.103381    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes/functional-804700
	I0915 06:54:14.103500    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:14.103500    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:14.103500    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:14.108880    7988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0915 06:54:14.108880    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:14.108880    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:14.108880    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:14 GMT
	I0915 06:54:14.108880    7988 round_trippers.go:580]     Audit-Id: 899d3a74-f5de-45da-9976-ae54a0da79a2
	I0915 06:54:14.108880    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:14.108880    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:14.108880    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:14.109543    7988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","resourceVersion":"428","creationTimestamp":"2024-09-15T06:52:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7a3ca67a20528f5dabbb456e8e4ce542b58ef23a","minikube.k8s.io/name":"functional-804700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_15T06_52_57_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-15T06:52:52Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0915 06:54:14.109690    7988 pod_ready.go:93] pod "kube-scheduler-functional-804700" in "kube-system" namespace has status "Ready":"True"
	I0915 06:54:14.109690    7988 pod_ready.go:82] duration metric: took 1.5151201s for pod "kube-scheduler-functional-804700" in "kube-system" namespace to be "Ready" ...
	I0915 06:54:14.109690    7988 pod_ready.go:39] duration metric: took 12.0675849s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 06:54:14.109690    7988 api_server.go:52] waiting for apiserver process to appear ...
	I0915 06:54:14.120475    7988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 06:54:14.161743    7988 command_runner.go:130] > 5995
	I0915 06:54:14.161816    7988 api_server.go:72] duration metric: took 20.5680975s to wait for apiserver process to appear ...
	I0915 06:54:14.161816    7988 api_server.go:88] waiting for apiserver healthz status ...
	I0915 06:54:14.161962    7988 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:49870/healthz ...
	I0915 06:54:14.173898    7988 api_server.go:279] https://127.0.0.1:49870/healthz returned 200:
	ok
	I0915 06:54:14.174844    7988 round_trippers.go:463] GET https://127.0.0.1:49870/version
	I0915 06:54:14.174844    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:14.174844    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:14.174844    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:14.178407    7988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0915 06:54:14.178407    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:14.178466    7988 round_trippers.go:580]     Audit-Id: 584b05c7-044b-4907-a71d-9a556d6b98f9
	I0915 06:54:14.178466    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:14.178466    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:14.178466    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:14.178466    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:14.178510    7988 round_trippers.go:580]     Content-Length: 263
	I0915 06:54:14.178510    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:14 GMT
	I0915 06:54:14.178510    7988 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0915 06:54:14.178746    7988 api_server.go:141] control plane version: v1.31.1
	I0915 06:54:14.178746    7988 api_server.go:131] duration metric: took 16.9301ms to wait for apiserver health ...
	I0915 06:54:14.178794    7988 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 06:54:14.178937    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/namespaces/kube-system/pods
	I0915 06:54:14.178937    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:14.179002    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:14.179002    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:14.186718    7988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0915 06:54:14.186718    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:14.186803    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:14.186803    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:14.186803    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:14.186803    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:14.186803    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:14 GMT
	I0915 06:54:14.186803    7988 round_trippers.go:580]     Audit-Id: 5a62ae0c-4ae1-4cfe-80f1-6c3ca12ab3c7
	I0915 06:54:14.190123    7988 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"576"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-j8m5z","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"974817d7-07c9-4087-9ebc-2ad96b730334","resourceVersion":"566","creationTimestamp":"2024-09-15T06:53:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"f049e883-50a7-43be-8441-d3e2d1888fa6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-15T06:53:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f049e883-50a7-43be-8441-d3e2d1888fa6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52044 chars]
	I0915 06:54:14.192834    7988 system_pods.go:59] 7 kube-system pods found
	I0915 06:54:14.192834    7988 system_pods.go:61] "coredns-7c65d6cfc9-j8m5z" [974817d7-07c9-4087-9ebc-2ad96b730334] Running
	I0915 06:54:14.192834    7988 system_pods.go:61] "etcd-functional-804700" [b893c2e4-5138-4a9d-b5e0-6c0a3472a010] Running
	I0915 06:54:14.192834    7988 system_pods.go:61] "kube-apiserver-functional-804700" [6449e620-061f-4655-afcf-14ebbaf6aa44] Running
	I0915 06:54:14.192834    7988 system_pods.go:61] "kube-controller-manager-functional-804700" [98f3184e-a432-482b-9b9d-4541819f8cd5] Running
	I0915 06:54:14.192834    7988 system_pods.go:61] "kube-proxy-m29mz" [e9476bf2-5f0f-4431-ac1b-83e3e926b334] Running
	I0915 06:54:14.192834    7988 system_pods.go:61] "kube-scheduler-functional-804700" [4762eaf3-c04b-48bc-813c-7a85d8edf5a8] Running
	I0915 06:54:14.192834    7988 system_pods.go:61] "storage-provisioner" [6bdedbb5-d79b-4900-b85f-af6c118ddab0] Running
	I0915 06:54:14.192834    7988 system_pods.go:74] duration metric: took 14.0392ms to wait for pod list to return data ...
	I0915 06:54:14.192834    7988 default_sa.go:34] waiting for default service account to be created ...
	I0915 06:54:14.192834    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/namespaces/default/serviceaccounts
	I0915 06:54:14.192834    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:14.192834    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:14.192834    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:14.199426    7988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 06:54:14.199426    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:14.199426    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:14.199481    7988 round_trippers.go:580]     Content-Length: 261
	I0915 06:54:14.199481    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:14 GMT
	I0915 06:54:14.199481    7988 round_trippers.go:580]     Audit-Id: a44b6641-1d57-4c26-80b8-60d92324fc78
	I0915 06:54:14.199481    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:14.199481    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:14.199515    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:14.199515    7988 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"576"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"2ce54ba5-703f-4893-878a-32f4a5301675","resourceVersion":"340","creationTimestamp":"2024-09-15T06:53:00Z"}}]}
	I0915 06:54:14.199605    7988 default_sa.go:45] found service account: "default"
	I0915 06:54:14.199605    7988 default_sa.go:55] duration metric: took 6.7709ms for default service account to be created ...
	I0915 06:54:14.199605    7988 system_pods.go:116] waiting for k8s-apps to be running ...
	I0915 06:54:14.340635    7988 request.go:632] Waited for 141.0292ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:49870/api/v1/namespaces/kube-system/pods
	I0915 06:54:14.340635    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/namespaces/kube-system/pods
	I0915 06:54:14.340635    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:14.340635    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:14.340635    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:14.346830    7988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 06:54:14.346830    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:14.346830    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:14.346830    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:14.346926    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:14 GMT
	I0915 06:54:14.346926    7988 round_trippers.go:580]     Audit-Id: 50f7e5b6-9b15-4dd1-b4bf-c455bf5ee5a4
	I0915 06:54:14.346926    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:14.346926    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:14.347875    7988 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"576"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-j8m5z","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"974817d7-07c9-4087-9ebc-2ad96b730334","resourceVersion":"566","creationTimestamp":"2024-09-15T06:53:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"f049e883-50a7-43be-8441-d3e2d1888fa6","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-15T06:53:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f049e883-50a7-43be-8441-d3e2d1888fa6\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52044 chars]
	I0915 06:54:14.351310    7988 system_pods.go:86] 7 kube-system pods found
	I0915 06:54:14.351310    7988 system_pods.go:89] "coredns-7c65d6cfc9-j8m5z" [974817d7-07c9-4087-9ebc-2ad96b730334] Running
	I0915 06:54:14.351310    7988 system_pods.go:89] "etcd-functional-804700" [b893c2e4-5138-4a9d-b5e0-6c0a3472a010] Running
	I0915 06:54:14.351310    7988 system_pods.go:89] "kube-apiserver-functional-804700" [6449e620-061f-4655-afcf-14ebbaf6aa44] Running
	I0915 06:54:14.351310    7988 system_pods.go:89] "kube-controller-manager-functional-804700" [98f3184e-a432-482b-9b9d-4541819f8cd5] Running
	I0915 06:54:14.351310    7988 system_pods.go:89] "kube-proxy-m29mz" [e9476bf2-5f0f-4431-ac1b-83e3e926b334] Running
	I0915 06:54:14.351310    7988 system_pods.go:89] "kube-scheduler-functional-804700" [4762eaf3-c04b-48bc-813c-7a85d8edf5a8] Running
	I0915 06:54:14.351310    7988 system_pods.go:89] "storage-provisioner" [6bdedbb5-d79b-4900-b85f-af6c118ddab0] Running
	I0915 06:54:14.351310    7988 system_pods.go:126] duration metric: took 151.704ms to wait for k8s-apps to be running ...
	I0915 06:54:14.351310    7988 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 06:54:14.362994    7988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 06:54:14.388438    7988 system_svc.go:56] duration metric: took 37.1275ms WaitForService to wait for kubelet
	I0915 06:54:14.388478    7988 kubeadm.go:582] duration metric: took 20.7947577s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 06:54:14.388478    7988 node_conditions.go:102] verifying NodePressure condition ...
	I0915 06:54:14.539795    7988 request.go:632] Waited for 151.191ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:49870/api/v1/nodes
	I0915 06:54:14.539795    7988 round_trippers.go:463] GET https://127.0.0.1:49870/api/v1/nodes
	I0915 06:54:14.539795    7988 round_trippers.go:469] Request Headers:
	I0915 06:54:14.539795    7988 round_trippers.go:473]     Accept: application/json, */*
	I0915 06:54:14.539795    7988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0915 06:54:14.546716    7988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0915 06:54:14.546774    7988 round_trippers.go:577] Response Headers:
	I0915 06:54:14.546827    7988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c4b4a4a9-49fe-400a-9364-ad528023ba9e
	I0915 06:54:14.546827    7988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 55da4a84-5c8f-47f3-8c83-226170be5eb3
	I0915 06:54:14.546868    7988 round_trippers.go:580]     Date: Sun, 15 Sep 2024 06:54:14 GMT
	I0915 06:54:14.546868    7988 round_trippers.go:580]     Audit-Id: e2bcc17b-18fc-406b-8c91-0702cd485d8b
	I0915 06:54:14.546868    7988 round_trippers.go:580]     Cache-Control: no-cache, private
	I0915 06:54:14.546868    7988 round_trippers.go:580]     Content-Type: application/json
	I0915 06:54:14.547047    7988 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"576"},"items":[{"metadata":{"name":"functional-804700","uid":"079aa650-ac6b-4c1b-a4ca-5daf9d7a218d","resourceVersion":"428","creationTimestamp":"2024-09-15T06:52:52Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-804700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7a3ca67a20528f5dabbb456e8e4ce542b58ef23a","minikube.k8s.io/name":"functional-804700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_15T06_52_57_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4907 chars]
	I0915 06:54:14.547206    7988 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0915 06:54:14.547206    7988 node_conditions.go:123] node cpu capacity is 16
	I0915 06:54:14.547206    7988 node_conditions.go:105] duration metric: took 158.6639ms to run NodePressure ...
	I0915 06:54:14.547206    7988 start.go:241] waiting for startup goroutines ...
	I0915 06:54:14.547206    7988 start.go:246] waiting for cluster config update ...
	I0915 06:54:14.547206    7988 start.go:255] writing updated cluster config ...
	I0915 06:54:14.559758    7988 ssh_runner.go:195] Run: rm -f paused
	I0915 06:54:14.708529    7988 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0915 06:54:14.713878    7988 out.go:177] * Done! kubectl is now configured to use "functional-804700" cluster and "default" namespace by default
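The trace above ends with the same readiness gates minikube applies on every start: a GET to /healthz and /version on the apiserver, a list of kube-system pods that must all be Running, and a lookup of the default service account. The sketch below reproduces those checks with client-go; it is a minimal illustration that assumes a standard kubeconfig (~/.kube/config) pointing at the cluster, not minikube's own implementation.

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: the cluster is reachable via the default kubeconfig path.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()

	// Same probe as the log's healthz check: GET /healthz on the apiserver.
	body, err := clientset.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	if err != nil {
		log.Fatalf("apiserver not healthy: %v", err)
	}
	fmt.Printf("healthz: %s\n", body)

	// Same idea as the system_pods wait: every kube-system pod should be Running.
	pods, err := clientset.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%-45s %s\n", p.Name, p.Status.Phase)
		if p.Status.Phase != corev1.PodRunning {
			log.Printf("pod %s is not Running", p.Name)
		}
	}
}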
	
	
	==> Docker <==
	Sep 15 06:53:50 functional-804700 cri-dockerd[4956]: time="2024-09-15T06:53:50Z" level=info msg="Start cri-dockerd grpc backend"
	Sep 15 06:53:50 functional-804700 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Sep 15 06:53:50 functional-804700 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	Sep 15 06:53:50 functional-804700 systemd[1]: cri-docker.service: Deactivated successfully.
	Sep 15 06:53:50 functional-804700 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	Sep 15 06:53:50 functional-804700 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Sep 15 06:53:50 functional-804700 cri-dockerd[5059]: time="2024-09-15T06:53:50Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Sep 15 06:53:50 functional-804700 cri-dockerd[5059]: time="2024-09-15T06:53:50Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Sep 15 06:53:50 functional-804700 cri-dockerd[5059]: time="2024-09-15T06:53:50Z" level=info msg="Start docker client with request timeout 0s"
	Sep 15 06:53:50 functional-804700 cri-dockerd[5059]: time="2024-09-15T06:53:50Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Sep 15 06:53:51 functional-804700 cri-dockerd[5059]: time="2024-09-15T06:53:51Z" level=info msg="Loaded network plugin cni"
	Sep 15 06:53:51 functional-804700 cri-dockerd[5059]: time="2024-09-15T06:53:51Z" level=info msg="Docker cri networking managed by network plugin cni"
	Sep 15 06:53:51 functional-804700 cri-dockerd[5059]: time="2024-09-15T06:53:51Z" level=info msg="Setting cgroupDriver cgroupfs"
	Sep 15 06:53:51 functional-804700 cri-dockerd[5059]: time="2024-09-15T06:53:51Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Sep 15 06:53:51 functional-804700 cri-dockerd[5059]: time="2024-09-15T06:53:51Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Sep 15 06:53:51 functional-804700 cri-dockerd[5059]: time="2024-09-15T06:53:51Z" level=info msg="Start cri-dockerd grpc backend"
	Sep 15 06:53:51 functional-804700 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Sep 15 06:53:51 functional-804700 cri-dockerd[5059]: time="2024-09-15T06:53:51Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7c65d6cfc9-j8m5z_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"aa3c6038f750b37a19d64a139d22762b62c1ba3de7b95b6f7a9cb63a4f0c2c05\""
	Sep 15 06:53:54 functional-804700 cri-dockerd[5059]: time="2024-09-15T06:53:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fc486e028409fb74d4daa930d62f02c1f03f33ee31f648bca643c83d11adffea/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Sep 15 06:53:55 functional-804700 cri-dockerd[5059]: time="2024-09-15T06:53:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/14e41a59f6e702a1fdcde7c5f0653b7288f344b937e198e4f2a902c34c62de04/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Sep 15 06:53:55 functional-804700 cri-dockerd[5059]: time="2024-09-15T06:53:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3223e31ccc6386691c5561bbab9f1a47596558aa688628887de74fe07c055796/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Sep 15 06:53:55 functional-804700 cri-dockerd[5059]: time="2024-09-15T06:53:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d7315d7f276d61e995b34d54eafade6397dca5795948894f968bedf9de6ca193/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Sep 15 06:53:56 functional-804700 cri-dockerd[5059]: time="2024-09-15T06:53:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4347504a5661dee933bd1067fb6977625830087eac0c1881ece53afcfafaa6f7/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Sep 15 06:53:56 functional-804700 cri-dockerd[5059]: time="2024-09-15T06:53:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d8cd3466f7f6d2ce8c8fe648a0f7d0e0fdfff4be128b84c63d8a292d189424e5/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Sep 15 06:53:56 functional-804700 cri-dockerd[5059]: time="2024-09-15T06:53:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/04ec9a31dd6ad42e7ee5839a0e45450551078d64609f95a97bbfeed46fed3cda/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	46c7c909f5308       c69fa2e9cbf5f       37 seconds ago       Running             coredns                   1                   04ec9a31dd6ad       coredns-7c65d6cfc9-j8m5z
	4f014e75099d0       6e38f40d628db       38 seconds ago       Running             storage-provisioner       2                   d8cd3466f7f6d       storage-provisioner
	6bafa96bcdb7a       2e96e5913fc06       38 seconds ago       Running             etcd                      1                   4347504a5661d       etcd-functional-804700
	43e80af2a9db8       60c005f310ff3       39 seconds ago       Running             kube-proxy                1                   d7315d7f276d6       kube-proxy-m29mz
	c834c7a170f97       6bab7719df100       39 seconds ago       Running             kube-apiserver            1                   3223e31ccc638       kube-apiserver-functional-804700
	7e62e173bd661       175ffd71cce3d       39 seconds ago       Running             kube-controller-manager   1                   14e41a59f6e70       kube-controller-manager-functional-804700
	539727d0cc884       9aa1fad941575       40 seconds ago       Running             kube-scheduler            1                   fc486e028409f       kube-scheduler-functional-804700
	43fd415472d8a       6e38f40d628db       About a minute ago   Exited              storage-provisioner       1                   9bef4ff9683d3       storage-provisioner
	e32eb09edafba       c69fa2e9cbf5f       About a minute ago   Exited              coredns                   0                   aa3c6038f750b       coredns-7c65d6cfc9-j8m5z
	ff65f79b26802       9aa1fad941575       About a minute ago   Exited              kube-scheduler            0                   557396d9f8aec       kube-scheduler-functional-804700
	
	
	==> coredns [46c7c909f530] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48642 - 46783 "HINFO IN 6699656347649767061.6733813694596705260. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.099518327s
	
	
	==> coredns [e32eb09edafb] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[1571173654]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (15-Sep-2024 06:53:05.545) (total time: 21046ms):
	Trace[1571173654]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21045ms (06:53:26.587)
	Trace[1571173654]: [21.046363855s] [21.046363855s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[1650882617]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (15-Sep-2024 06:53:05.545) (total time: 21046ms):
	Trace[1650882617]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21045ms (06:53:26.587)
	Trace[1650882617]: [21.046753702s] [21.046753702s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[1159457205]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (15-Sep-2024 06:53:05.545) (total time: 21046ms):
	Trace[1159457205]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21046ms (06:53:26.588)
	Trace[1159457205]: [21.046727699s] [21.046727699s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
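The connection-refused errors above come from coredns trying to reach the apiserver through the kubernetes Service VIP (10.96.0.1:443) while the control plane was restarting. The sketch below is a minimal connectivity probe of that VIP, run from inside a pod on this cluster; it is only illustrative and assumes the default ClusterIP shown in the log, it is not part of coredns or of the test suite.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Assumption: 10.96.0.1:443 is the kubernetes Service ClusterIP, as seen in the coredns errors.
	addr := "10.96.0.1:443"
	for i := 1; i <= 5; i++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			// While the apiserver is down this prints e.g. "connect: connection refused".
			fmt.Printf("attempt %d: %v\n", i, err)
			time.Sleep(2 * time.Second)
			continue
		}
		conn.Close()
		fmt.Printf("attempt %d: %s reachable\n", i, addr)
		return
	}
}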
	
	
	==> describe nodes <==
	Name:               functional-804700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-804700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=functional-804700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_15T06_52_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 06:52:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-804700
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 06:54:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 06:54:29 +0000   Sun, 15 Sep 2024 06:52:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 06:54:29 +0000   Sun, 15 Sep 2024 06:52:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 06:54:29 +0000   Sun, 15 Sep 2024 06:52:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 06:54:29 +0000   Sun, 15 Sep 2024 06:52:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-804700
	Capacity:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868688Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868688Ki
	  pods:               110
	System Info:
	  Machine ID:                 92cbeaf1a5fe444090c9b7a7260dec79
	  System UUID:                92cbeaf1a5fe444090c9b7a7260dec79
	  Boot ID:                    c1102496-7d49-4e83-b615-37466f69e894
	  Kernel Version:             5.15.153.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-j8m5z                     100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     93s
	  kube-system                 etcd-functional-804700                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         101s
	  kube-system                 kube-apiserver-functional-804700             250m (1%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-controller-manager-functional-804700    200m (1%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-proxy-m29mz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-scheduler-functional-804700             100m (0%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (4%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                             Age                  From             Message
	  ----     ------                             ----                 ----             -------
	  Normal   Starting                           88s                  kube-proxy       
	  Normal   Starting                           31s                  kube-proxy       
	  Warning  PossibleMemoryBackedVolumesOnDisk  108s                 kubelet          The tmpfs noswap option is not supported. Memory-backed volumes (e.g. secrets, emptyDirs, etc.) might be swapped to disk and should no longer be considered secure.
	  Normal   Starting                           108s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                           108s                 kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced            107s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure              107s (x7 over 107s)  kubelet          Node functional-804700 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID               107s (x7 over 107s)  kubelet          Node functional-804700 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory            107s (x7 over 107s)  kubelet          Node functional-804700 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced            98s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                           98s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                           98s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Warning  PossibleMemoryBackedVolumesOnDisk  98s                  kubelet          The tmpfs noswap option is not supported. Memory-backed volumes (e.g. secrets, emptyDirs, etc.) might be swapped to disk and should no longer be considered secure.
	  Normal   NodeHasSufficientMemory            97s                  kubelet          Node functional-804700 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure              97s                  kubelet          Node functional-804700 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID               97s                  kubelet          Node functional-804700 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode                     94s                  node-controller  Node functional-804700 event: Registered Node functional-804700 in Controller
	  Normal   NodeNotReady                       46s                  kubelet          Node functional-804700 status is now: NodeNotReady
	  Normal   RegisteredNode                     29s                  node-controller  Node functional-804700 event: Registered Node functional-804700 in Controller
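The Conditions and Capacity blocks above are what the earlier "verifying NodePressure condition" step inspects: MemoryPressure, DiskPressure and PIDPressure should be False, Ready should be True, and the CPU and ephemeral-storage capacity values are recorded. A minimal client-go sketch of the same check follows; it again assumes a local kubeconfig and is not the minikube code itself.

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: the cluster is reachable via the default kubeconfig path.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		// Capacity figures corresponding to "node cpu capacity" / "node storage ephemeral capacity" in the log.
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
			n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		for _, c := range n.Status.Conditions {
			fmt.Printf("  %-16s %s\n", c.Type, c.Status)
			if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
				log.Printf("node %s is not Ready", n.Name)
			}
		}
	}
}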
	
	
	==> dmesg <==
	[  +0.522980] misc dxg: dxgk: dxgglobal_acquire_channel_lock: Failed to acquire global channel lock
	[  +1.080299] WSL (2) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.002716] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.002659] WSL (1) ERROR: ConfigMountFsTab:2589: Processing fstab with mount -a failed.
	[  +0.004466] WSL (1) ERROR: ConfigApplyWindowsLibPath:2537: open /etc/ld.so.conf.d/ld.wsl.conf
	[  +0.000004]  failed 2
	[  +0.017869] WSL (3) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.001888] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.003623] WSL (4) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.002394] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.080494] WSL (1) WARNING: /usr/share/zoneinfo/Etc/UTC not found. Is the tzdata package installed?
	[  +0.099908] misc dxg: dxgk: dxgglobal_acquire_channel_lock: Failed to acquire global channel lock
	[  +0.921474] netlink: 'init': attribute type 4 has an invalid length.
	[Sep15 05:15] tmpfs: Unknown parameter 'noswap'
	[  +9.562128] tmpfs: Unknown parameter 'noswap'
	[Sep15 06:33] tmpfs: Unknown parameter 'noswap'
	[  +9.861380] tmpfs: Unknown parameter 'noswap'
	[Sep15 06:51] tmpfs: Unknown parameter 'noswap'
	[  +9.954100] tmpfs: Unknown parameter 'noswap'
	[ +14.888874] tmpfs: Unknown parameter 'noswap'
	[Sep15 06:52] tmpfs: Unknown parameter 'noswap'
	[  +9.511178] tmpfs: Unknown parameter 'noswap'
	
	
	==> etcd [6bafa96bcdb7] <==
	{"level":"info","ts":"2024-09-15T06:53:59.957652Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-804700 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-15T06:53:59.957737Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T06:53:59.957914Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T06:53:59.958573Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-15T06:53:59.958684Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-15T06:53:59.959434Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T06:53:59.959492Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T06:53:59.961576Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-15T06:53:59.962313Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-15T06:54:02.237393Z","caller":"traceutil/trace.go:171","msg":"trace[719199616] linearizableReadLoop","detail":"{readStateIndex:486; appliedIndex:485; }","duration":"102.675425ms","start":"2024-09-15T06:54:02.134698Z","end":"2024-09-15T06:54:02.237374Z","steps":["trace[719199616] 'read index received'  (duration: 102.472599ms)","trace[719199616] 'applied index is now lower than readState.Index'  (duration: 201.325µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-15T06:54:02.237492Z","caller":"traceutil/trace.go:171","msg":"trace[935028528] transaction","detail":"{read_only:false; number_of_response:1; response_revision:465; }","duration":"103.023969ms","start":"2024-09-15T06:54:02.134455Z","end":"2024-09-15T06:54:02.237479Z","steps":["trace[935028528] 'process raft request'  (duration: 102.589114ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:54:02.238125Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.369012ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-j8m5z\" ","response":"range_response_count:1 size:4917"}
	{"level":"info","ts":"2024-09-15T06:54:02.238295Z","caller":"traceutil/trace.go:171","msg":"trace[1143766142] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7c65d6cfc9-j8m5z; range_end:; response_count:1; response_revision:465; }","duration":"103.558336ms","start":"2024-09-15T06:54:02.134694Z","end":"2024-09-15T06:54:02.238252Z","steps":["trace[1143766142] 'agreement among raft nodes before linearized reading'  (duration: 102.826244ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:54:02.241952Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.174291ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/storage-provisioner\" ","response":"range_response_count:1 size:3993"}
	{"level":"info","ts":"2024-09-15T06:54:02.243284Z","caller":"traceutil/trace.go:171","msg":"trace[275155560] range","detail":"{range_begin:/registry/pods/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:466; }","duration":"108.51366ms","start":"2024-09-15T06:54:02.134755Z","end":"2024-09-15T06:54:02.243268Z","steps":["trace[275155560] 'agreement among raft nodes before linearized reading'  (duration: 107.031373ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:54:02.243480Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.815446ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/functional-804700\" ","response":"range_response_count:1 size:4443"}
	{"level":"info","ts":"2024-09-15T06:54:02.243530Z","caller":"traceutil/trace.go:171","msg":"trace[319975690] range","detail":"{range_begin:/registry/minions/functional-804700; range_end:; response_count:1; response_revision:466; }","duration":"106.866852ms","start":"2024-09-15T06:54:02.136649Z","end":"2024-09-15T06:54:02.243516Z","steps":["trace[319975690] 'agreement among raft nodes before linearized reading'  (duration: 106.697431ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:54:02.534513Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.158011ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/functional-804700\" ","response":"range_response_count:1 size:4443"}
	{"level":"warn","ts":"2024-09-15T06:54:02.534619Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.288228ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2024-09-15T06:54:02.534697Z","caller":"traceutil/trace.go:171","msg":"trace[1887705328] range","detail":"{range_begin:/registry/minions/functional-804700; range_end:; response_count:1; response_revision:467; }","duration":"104.30573ms","start":"2024-09-15T06:54:02.430331Z","end":"2024-09-15T06:54:02.534637Z","steps":["trace[1887705328] 'range keys from in-memory index tree'  (duration: 104.079502ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:54:02.534711Z","caller":"traceutil/trace.go:171","msg":"trace[467965334] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:467; }","duration":"104.382739ms","start":"2024-09-15T06:54:02.430314Z","end":"2024-09-15T06:54:02.534697Z","steps":["trace[467965334] 'range keys from in-memory index tree'  (duration: 104.15341ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-15T06:54:02.542643Z","caller":"traceutil/trace.go:171","msg":"trace[736864446] linearizableReadLoop","detail":"{readStateIndex:489; appliedIndex:488; }","duration":"103.324806ms","start":"2024-09-15T06:54:02.439306Z","end":"2024-09-15T06:54:02.542631Z","steps":["trace[736864446] 'read index received'  (duration: 102.843246ms)","trace[736864446] 'applied index is now lower than readState.Index'  (duration: 480.06µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-15T06:54:02.542989Z","caller":"traceutil/trace.go:171","msg":"trace[2053983772] transaction","detail":"{read_only:false; response_revision:468; number_of_response:1; }","duration":"104.196817ms","start":"2024-09-15T06:54:02.438779Z","end":"2024-09-15T06:54:02.542976Z","steps":["trace[2053983772] 'process raft request'  (duration: 103.522332ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-15T06:54:02.543802Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.486853ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-15T06:54:02.543971Z","caller":"traceutil/trace.go:171","msg":"trace[845022537] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:0; response_revision:468; }","duration":"104.656674ms","start":"2024-09-15T06:54:02.439301Z","end":"2024-09-15T06:54:02.543957Z","steps":["trace[845022537] 'agreement among raft nodes before linearized reading'  (duration: 104.471351ms)"],"step_count":1}
	
	
	==> kernel <==
	 06:54:34 up  1:56,  0 users,  load average: 1.73, 1.51, 1.26
	Linux functional-804700 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [c834c7a170f9] <==
	I0915 06:54:01.931356       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I0915 06:54:01.931369       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0915 06:54:01.931394       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0915 06:54:01.931400       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0915 06:54:02.134305       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0915 06:54:02.134385       1 aggregator.go:171] initial CRD sync complete...
	I0915 06:54:02.134404       1 autoregister_controller.go:144] Starting autoregister controller
	I0915 06:54:02.134414       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0915 06:54:02.134422       1 cache.go:39] Caches are synced for autoregister controller
	I0915 06:54:02.229755       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0915 06:54:02.229793       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0915 06:54:02.229813       1 policy_source.go:224] refreshing policies
	I0915 06:54:02.229917       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0915 06:54:02.229770       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0915 06:54:02.230226       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0915 06:54:02.230523       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0915 06:54:02.230543       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0915 06:54:02.230540       1 shared_informer.go:320] Caches are synced for configmaps
	I0915 06:54:02.230682       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0915 06:54:02.233700       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0915 06:54:02.432610       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0915 06:54:02.544627       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0915 06:54:02.936020       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0915 06:54:05.494766       1 controller.go:615] quota admission added evaluator for: endpoints
	I0915 06:54:05.697799       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [7e62e173bd66] <==
	I0915 06:54:05.350448       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0915 06:54:05.350459       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0915 06:54:05.350461       1 shared_informer.go:320] Caches are synced for daemon sets
	I0915 06:54:05.350466       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0915 06:54:05.350529       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-804700"
	I0915 06:54:05.358315       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0915 06:54:05.361387       1 shared_informer.go:320] Caches are synced for taint
	I0915 06:54:05.361597       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0915 06:54:05.361677       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-804700"
	I0915 06:54:05.361729       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0915 06:54:05.369345       1 shared_informer.go:320] Caches are synced for crt configmap
	I0915 06:54:05.373378       1 shared_informer.go:320] Caches are synced for GC
	I0915 06:54:05.389577       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0915 06:54:05.391314       1 shared_informer.go:320] Caches are synced for attach detach
	I0915 06:54:05.391683       1 shared_informer.go:320] Caches are synced for TTL
	I0915 06:54:05.495806       1 shared_informer.go:320] Caches are synced for resource quota
	I0915 06:54:05.500383       1 shared_informer.go:320] Caches are synced for resource quota
	I0915 06:54:05.662512       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="385.552472ms"
	I0915 06:54:05.662858       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="115.715µs"
	I0915 06:54:05.906676       1 shared_informer.go:320] Caches are synced for garbage collector
	I0915 06:54:05.989753       1 shared_informer.go:320] Caches are synced for garbage collector
	I0915 06:54:05.989862       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0915 06:54:10.263211       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="25.764302ms"
	I0915 06:54:10.263434       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="66.108µs"
	I0915 06:54:29.056728       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-804700"
	
	
	==> kube-proxy [43e80af2a9db] <==
	E0915 06:53:58.366485       1 metrics.go:340] "failed to initialize nfacct client" err="nfacct sub-system not available"
	E0915 06:53:58.387974       1 metrics.go:340] "failed to initialize nfacct client" err="nfacct sub-system not available"
	I0915 06:53:58.536520       1 server_linux.go:66] "Using iptables proxy"
	I0915 06:54:02.245853       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0915 06:54:02.246014       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 06:54:02.656198       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0915 06:54:02.656555       1 server_linux.go:169] "Using iptables Proxier"
	I0915 06:54:02.733212       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	E0915 06:54:02.762426       1 proxier.go:283] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv4"
	E0915 06:54:02.785695       1 proxier.go:283] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv6"
	I0915 06:54:02.785951       1 server.go:483] "Version info" version="v1.31.1"
	I0915 06:54:02.785965       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 06:54:02.831518       1 config.go:199] "Starting service config controller"
	I0915 06:54:02.831694       1 config.go:105] "Starting endpoint slice config controller"
	I0915 06:54:02.831739       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 06:54:02.831859       1 config.go:328] "Starting node config controller"
	I0915 06:54:02.831878       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0915 06:54:02.833736       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 06:54:02.932256       1 shared_informer.go:320] Caches are synced for node config
	I0915 06:54:02.932849       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0915 06:54:02.935227       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [539727d0cc88] <==
	I0915 06:53:59.249851       1 serving.go:386] Generated self-signed cert in-memory
	W0915 06:54:02.131179       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0915 06:54:02.131252       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0915 06:54:02.131276       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0915 06:54:02.131287       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0915 06:54:02.239856       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0915 06:54:02.239973       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 06:54:02.245897       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0915 06:54:02.246192       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0915 06:54:02.246230       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0915 06:54:02.246429       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0915 06:54:02.530602       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ff65f79b2680] <==
	E0915 06:52:53.721002       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0915 06:52:53.769837       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 06:52:53.769889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 06:52:53.908840       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0915 06:52:53.908991       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:52:53.913414       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0915 06:52:53.913458       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:52:53.974753       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0915 06:52:53.974835       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0915 06:52:53.974848       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0915 06:52:53.974874       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:52:53.978454       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 06:52:53.978557       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0915 06:52:54.053328       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0915 06:52:54.053458       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:52:54.113177       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0915 06:52:54.113323       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:52:54.115277       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0915 06:52:54.115472       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:52:54.181436       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0915 06:52:54.181546       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0915 06:52:56.361057       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0915 06:53:37.842354       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0915 06:53:37.842653       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0915 06:53:37.842839       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 15 06:53:53 functional-804700 kubelet[2568]: I0915 06:53:53.952925    2568 status_manager.go:851] "Failed to get status for pod" podUID="6bdedbb5-d79b-4900-b85f-af6c118ddab0" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 15 06:53:53 functional-804700 kubelet[2568]: I0915 06:53:53.953307    2568 status_manager.go:851] "Failed to get status for pod" podUID="9e1ab0e3861a4926544ba1c53916645b" pod="kube-system/etcd-functional-804700" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-804700\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 15 06:53:53 functional-804700 kubelet[2568]: I0915 06:53:53.953914    2568 status_manager.go:851] "Failed to get status for pod" podUID="3152d1fa7c8023abfe4b7a043966ca79" pod="kube-system/kube-apiserver-functional-804700" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-804700\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 15 06:53:53 functional-804700 kubelet[2568]: I0915 06:53:53.954483    2568 status_manager.go:851] "Failed to get status for pod" podUID="8d79b44b78cc4b6b9d1e74a3adc1e7e1" pod="kube-system/kube-scheduler-functional-804700" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-804700\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 15 06:53:53 functional-804700 kubelet[2568]: I0915 06:53:53.955436    2568 status_manager.go:851] "Failed to get status for pod" podUID="37583227df99c44291498d4af7cde4d2" pod="kube-system/kube-controller-manager-functional-804700" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-804700\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 15 06:53:54 functional-804700 kubelet[2568]: E0915 06:53:54.355737    2568 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-804700?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="6.4s"
	Sep 15 06:53:54 functional-804700 kubelet[2568]: I0915 06:53:54.434206    2568 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4347504a5661dee933bd1067fb6977625830087eac0c1881ece53afcfafaa6f7"
	Sep 15 06:53:54 functional-804700 kubelet[2568]: I0915 06:53:54.449130    2568 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04ec9a31dd6ad42e7ee5839a0e45450551078d64609f95a97bbfeed46fed3cda"
	Sep 15 06:53:55 functional-804700 kubelet[2568]: E0915 06:53:55.599955    2568 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-functional-804700.17f55863d92f4b05  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-functional-804700,UID:8d79b44b78cc4b6b9d1e74a3adc1e7e1,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10259/healthz\": dial tcp 127.0.0.1:10259: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-804700,},FirstTimestamp:2024-09-15 06:53:38.035215109 +0000 UTC m=+41.790108286,LastTimestamp:2024-09-15 06:53:38.035215109 +0000 UTC m=+41.790108286,C
ount:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-804700,}"
	Sep 15 06:53:55 functional-804700 kubelet[2568]: I0915 06:53:55.948600    2568 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7315d7f276d61e995b34d54eafade6397dca5795948894f968bedf9de6ca193"
	Sep 15 06:53:56 functional-804700 kubelet[2568]: I0915 06:53:56.142754    2568 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8cd3466f7f6d2ce8c8fe648a0f7d0e0fdfff4be128b84c63d8a292d189424e5"
	Sep 15 06:53:56 functional-804700 kubelet[2568]: I0915 06:53:56.157094    2568 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3223e31ccc6386691c5561bbab9f1a47596558aa688628887de74fe07c055796"
	Sep 15 06:53:56 functional-804700 kubelet[2568]: I0915 06:53:56.440781    2568 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc486e028409fb74d4daa930d62f02c1f03f33ee31f648bca643c83d11adffea"
	Sep 15 06:53:56 functional-804700 kubelet[2568]: I0915 06:53:56.632756    2568 scope.go:117] "RemoveContainer" containerID="7d7253371e55f81e9f79f1b4064e1c02a80d9d6251a11e796441162b3f2eaa4a"
	Sep 15 06:53:56 functional-804700 kubelet[2568]: I0915 06:53:56.635001    2568 status_manager.go:851] "Failed to get status for pod" podUID="37583227df99c44291498d4af7cde4d2" pod="kube-system/kube-controller-manager-functional-804700" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-804700\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 15 06:53:56 functional-804700 kubelet[2568]: I0915 06:53:56.635981    2568 status_manager.go:851] "Failed to get status for pod" podUID="e9476bf2-5f0f-4431-ac1b-83e3e926b334" pod="kube-system/kube-proxy-m29mz" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-m29mz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 15 06:53:56 functional-804700 kubelet[2568]: I0915 06:53:56.636661    2568 status_manager.go:851] "Failed to get status for pod" podUID="974817d7-07c9-4087-9ebc-2ad96b730334" pod="kube-system/coredns-7c65d6cfc9-j8m5z" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-j8m5z\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 15 06:53:56 functional-804700 kubelet[2568]: I0915 06:53:56.637960    2568 status_manager.go:851] "Failed to get status for pod" podUID="6bdedbb5-d79b-4900-b85f-af6c118ddab0" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 15 06:53:56 functional-804700 kubelet[2568]: I0915 06:53:56.638682    2568 status_manager.go:851] "Failed to get status for pod" podUID="9e1ab0e3861a4926544ba1c53916645b" pod="kube-system/etcd-functional-804700" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-804700\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 15 06:53:56 functional-804700 kubelet[2568]: I0915 06:53:56.639649    2568 status_manager.go:851] "Failed to get status for pod" podUID="3152d1fa7c8023abfe4b7a043966ca79" pod="kube-system/kube-apiserver-functional-804700" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-804700\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 15 06:53:56 functional-804700 kubelet[2568]: I0915 06:53:56.640141    2568 status_manager.go:851] "Failed to get status for pod" podUID="8d79b44b78cc4b6b9d1e74a3adc1e7e1" pod="kube-system/kube-scheduler-functional-804700" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-804700\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 15 06:53:56 functional-804700 kubelet[2568]: I0915 06:53:56.941999    2568 scope.go:117] "RemoveContainer" containerID="406f1a2720d6682dc0ad56113bb998a7f39c14c01d3f74ba394ce87a40abc36b"
	Sep 15 06:53:57 functional-804700 kubelet[2568]: I0915 06:53:57.430816    2568 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14e41a59f6e702a1fdcde7c5f0653b7288f344b937e198e4f2a902c34c62de04"
	Sep 15 06:53:57 functional-804700 kubelet[2568]: I0915 06:53:57.430909    2568 scope.go:117] "RemoveContainer" containerID="8b3df26d37bd114d6ce9053f8daabba900c8e02712d728a1e0379eb249c1e62a"
	Sep 15 06:53:57 functional-804700 kubelet[2568]: I0915 06:53:57.731566    2568 scope.go:117] "RemoveContainer" containerID="c7959adf8280cf1c1d96aaf386a0c8201e71c31f37c634d6d430bd8fe0fe9e18"
	
	
	==> storage-provisioner [43fd415472d8] <==
	I0915 06:53:28.312275       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0915 06:53:28.345513       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0915 06:53:28.345665       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0915 06:53:28.359143       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0915 06:53:28.359350       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bab84ce5-f432-42b1-8346-af654e8b75e5", APIVersion:"v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-804700_f21a1c60-1db4-485b-959d-bff1acf28c32 became leader
	I0915 06:53:28.359515       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-804700_f21a1c60-1db4-485b-959d-bff1acf28c32!
	I0915 06:53:28.461171       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-804700_f21a1c60-1db4-485b-959d-bff1acf28c32!
	
	
	==> storage-provisioner [4f014e75099d] <==
	I0915 06:53:58.141129       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0915 06:54:02.345031       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0915 06:54:02.345212       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0915 06:54:19.960711       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0915 06:54:19.960942       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bab84ce5-f432-42b1-8346-af654e8b75e5", APIVersion:"v1", ResourceVersion:"577", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-804700_2eaa15cb-85b3-47d4-954e-48a7dc8f9027 became leader
	I0915 06:54:19.961137       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-804700_2eaa15cb-85b3-47d4-954e-48a7dc8f9027!
	I0915 06:54:20.061773       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-804700_2eaa15cb-85b3-47d4-954e-48a7dc8f9027!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-804700 -n functional-804700
helpers_test.go:261: (dbg) Run:  kubectl --context functional-804700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (5.18s)

                                                
                                    

Test pass (313/340)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.56
4 TestDownloadOnly/v1.20.0/preload-exists 0.08
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.31
9 TestDownloadOnly/v1.20.0/DeleteAll 1.96
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.7
12 TestDownloadOnly/v1.31.1/json-events 7.37
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.32
18 TestDownloadOnly/v1.31.1/DeleteAll 1.24
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.71
20 TestDownloadOnlyKic 3.23
21 TestBinaryMirror 2.98
22 TestOffline 148.88
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.28
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.26
27 TestAddons/Setup 563.83
29 TestAddons/serial/Volcano 55.46
31 TestAddons/serial/GCPAuth/Namespaces 0.37
35 TestAddons/parallel/InspektorGadget 15.15
36 TestAddons/parallel/MetricsServer 6.68
37 TestAddons/parallel/HelmTiller 16.19
39 TestAddons/parallel/CSI 63.53
40 TestAddons/parallel/Headlamp 30.59
41 TestAddons/parallel/CloudSpanner 7.49
42 TestAddons/parallel/LocalPath 15.14
43 TestAddons/parallel/NvidiaDevicePlugin 7.51
44 TestAddons/parallel/Yakd 12.61
45 TestAddons/StoppedEnableDisable 13.82
46 TestCertOptions 75.47
47 TestCertExpiration 298.57
48 TestDockerFlags 74.28
49 TestForceSystemdFlag 113.82
50 TestForceSystemdEnv 96.73
57 TestErrorSpam/start 4.02
58 TestErrorSpam/status 2.93
59 TestErrorSpam/pause 3.41
60 TestErrorSpam/unpause 3.64
61 TestErrorSpam/stop 14.89
64 TestFunctional/serial/CopySyncFile 0.03
65 TestFunctional/serial/StartWithProxy 93.48
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 43.07
68 TestFunctional/serial/KubeContext 0.14
69 TestFunctional/serial/KubectlGetPods 0.28
72 TestFunctional/serial/CacheCmd/cache/add_remote 6.43
73 TestFunctional/serial/CacheCmd/cache/add_local 3.62
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.27
75 TestFunctional/serial/CacheCmd/cache/list 0.26
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.85
77 TestFunctional/serial/CacheCmd/cache/cache_reload 4.06
78 TestFunctional/serial/CacheCmd/cache/delete 0.53
79 TestFunctional/serial/MinikubeKubectlCmd 0.53
81 TestFunctional/serial/ExtraConfig 50.53
82 TestFunctional/serial/ComponentHealth 0.19
83 TestFunctional/serial/LogsCmd 2.45
84 TestFunctional/serial/LogsFileCmd 2.56
85 TestFunctional/serial/InvalidService 5.99
87 TestFunctional/parallel/ConfigCmd 1.63
89 TestFunctional/parallel/DryRun 2.95
90 TestFunctional/parallel/InternationalLanguage 1.1
91 TestFunctional/parallel/StatusCmd 3.33
96 TestFunctional/parallel/AddonsCmd 0.62
97 TestFunctional/parallel/PersistentVolumeClaim 51.33
99 TestFunctional/parallel/SSHCmd 1.54
100 TestFunctional/parallel/CpCmd 5.34
101 TestFunctional/parallel/MySQL 80.08
102 TestFunctional/parallel/FileSync 0.99
103 TestFunctional/parallel/CertSync 4.93
107 TestFunctional/parallel/NodeLabels 0.19
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.98
111 TestFunctional/parallel/License 3.29
112 TestFunctional/parallel/ServiceCmd/DeployApp 20.46
113 TestFunctional/parallel/ProfileCmd/profile_not_create 1.61
114 TestFunctional/parallel/ProfileCmd/profile_list 1.49
115 TestFunctional/parallel/ProfileCmd/profile_json_output 1.38
116 TestFunctional/parallel/Version/short 0.32
117 TestFunctional/parallel/Version/components 2.83
118 TestFunctional/parallel/ImageCommands/ImageListShort 1.21
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.82
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.7
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.82
122 TestFunctional/parallel/ImageCommands/ImageBuild 9.31
123 TestFunctional/parallel/ImageCommands/Setup 2.31
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.39
125 TestFunctional/parallel/ServiceCmd/List 1.11
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.13
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.92
128 TestFunctional/parallel/ServiceCmd/HTTPS 15.01
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.88
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.47
131 TestFunctional/parallel/ImageCommands/ImageRemove 1.41
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.1
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.33
134 TestFunctional/parallel/ServiceCmd/Format 15.03
135 TestFunctional/parallel/DockerEnv/powershell 7.57
136 TestFunctional/parallel/UpdateContextCmd/no_changes 0.5
137 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.5
138 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.49
140 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.99
141 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
143 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 15.59
144 TestFunctional/parallel/ServiceCmd/URL 15.03
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.29
150 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.21
151 TestFunctional/delete_echo-server_images 0.21
152 TestFunctional/delete_my-image_image 0.08
153 TestFunctional/delete_minikube_cached_images 0.09
157 TestMultiControlPlane/serial/StartCluster 212.67
158 TestMultiControlPlane/serial/DeployApp 26.24
159 TestMultiControlPlane/serial/PingHostFromPods 3.85
160 TestMultiControlPlane/serial/AddWorkerNode 55.53
161 TestMultiControlPlane/serial/NodeLabels 0.21
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 2.18
163 TestMultiControlPlane/serial/CopyFile 49.5
164 TestMultiControlPlane/serial/StopSecondaryNode 14.21
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 1.73
166 TestMultiControlPlane/serial/RestartSecondaryNode 153.23
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 2.27
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 241.2
169 TestMultiControlPlane/serial/DeleteSecondaryNode 16.98
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.58
171 TestMultiControlPlane/serial/StopCluster 36.65
172 TestMultiControlPlane/serial/RestartCluster 109.84
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 1.59
174 TestMultiControlPlane/serial/AddSecondaryNode 76.39
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 2.25
178 TestImageBuild/serial/Setup 64.4
179 TestImageBuild/serial/NormalBuild 5.82
180 TestImageBuild/serial/BuildWithBuildArg 2.5
181 TestImageBuild/serial/BuildWithDockerIgnore 1.64
182 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.72
186 TestJSONOutput/start/Command 95.76
187 TestJSONOutput/start/Audit 0
189 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Command 1.55
193 TestJSONOutput/pause/Audit 0
195 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Command 1.26
199 TestJSONOutput/unpause/Audit 0
201 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
204 TestJSONOutput/stop/Command 12.59
205 TestJSONOutput/stop/Audit 0
207 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
209 TestErrorJSONOutput 0.99
211 TestKicCustomNetwork/create_custom_network 82.46
212 TestKicCustomNetwork/use_default_bridge_network 70.51
213 TestKicExistingNetwork 71.2
214 TestKicCustomSubnet 70.77
215 TestKicStaticIP 72.44
216 TestMainNoArgs 0.23
217 TestMinikubeProfile 142.33
220 TestMountStart/serial/StartWithMountFirst 19.54
221 TestMountStart/serial/VerifyMountFirst 0.81
222 TestMountStart/serial/StartWithMountSecond 17.47
223 TestMountStart/serial/VerifyMountSecond 0.78
224 TestMountStart/serial/DeleteFirst 2.79
225 TestMountStart/serial/VerifyMountPostDelete 0.75
226 TestMountStart/serial/Stop 2.03
227 TestMountStart/serial/RestartStopped 12.95
228 TestMountStart/serial/VerifyMountPostStop 0.75
231 TestMultiNode/serial/FreshStart2Nodes 149.6
232 TestMultiNode/serial/DeployApp2Nodes 36.96
233 TestMultiNode/serial/PingHostFrom2Pods 2.57
234 TestMultiNode/serial/AddNode 51.02
235 TestMultiNode/serial/MultiNodeLabels 0.19
236 TestMultiNode/serial/ProfileList 1.01
237 TestMultiNode/serial/CopyFile 26.92
238 TestMultiNode/serial/StopNode 4.88
239 TestMultiNode/serial/StartAfterStop 18.63
240 TestMultiNode/serial/RestartKeepsNodes 117.25
241 TestMultiNode/serial/DeleteNode 10.21
242 TestMultiNode/serial/StopMultiNode 24.48
243 TestMultiNode/serial/RestartMultiNode 54.69
244 TestMultiNode/serial/ValidateNameConflict 64.48
248 TestPreload 178.57
249 TestScheduledStopWindows 134.2
253 TestInsufficientStorage 43.05
254 TestRunningBinaryUpgrade 223.31
256 TestKubernetesUpgrade 518.05
257 TestMissingContainerUpgrade 261.01
258 TestStoppedBinaryUpgrade/Setup 1.22
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.38
269 TestPause/serial/Start 142.61
270 TestNoKubernetes/serial/StartWithK8s 104.66
271 TestStoppedBinaryUpgrade/Upgrade 321.22
272 TestNoKubernetes/serial/StartWithStopK8s 28.65
273 TestNoKubernetes/serial/Start 31.96
274 TestPause/serial/SecondStartNoReconfiguration 51.96
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.79
276 TestNoKubernetes/serial/ProfileList 3.36
277 TestNoKubernetes/serial/Stop 2.39
278 TestNoKubernetes/serial/StartNoArgs 13.57
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.79
280 TestPause/serial/Pause 1.59
281 TestPause/serial/VerifyStatus 1.02
282 TestPause/serial/Unpause 1.57
283 TestPause/serial/PauseAgain 2.1
284 TestPause/serial/DeletePaused 5.57
285 TestPause/serial/VerifyDeletedResources 9.82
286 TestStoppedBinaryUpgrade/MinikubeLogs 3.13
299 TestStartStop/group/old-k8s-version/serial/FirstStart 217.28
301 TestStartStop/group/no-preload/serial/FirstStart 120.75
302 TestStartStop/group/no-preload/serial/DeployApp 10.84
303 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.61
304 TestStartStop/group/no-preload/serial/Stop 12.27
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.82
306 TestStartStop/group/no-preload/serial/SecondStart 320.05
308 TestStartStop/group/embed-certs/serial/FirstStart 130.68
309 TestStartStop/group/old-k8s-version/serial/DeployApp 14.6
311 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 97.87
312 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.5
313 TestStartStop/group/old-k8s-version/serial/Stop 16.61
314 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.93
315 TestStartStop/group/old-k8s-version/serial/SecondStart 404.75
316 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.72
317 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.26
318 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.74
319 TestStartStop/group/embed-certs/serial/DeployApp 12.02
320 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.87
321 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 281.25
322 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.32
323 TestStartStop/group/embed-certs/serial/Stop 12.52
324 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.85
325 TestStartStop/group/embed-certs/serial/SecondStart 298.61
326 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
327 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.4
328 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.64
329 TestStartStop/group/no-preload/serial/Pause 7.48
331 TestStartStop/group/newest-cni/serial/FirstStart 69.25
332 TestStartStop/group/newest-cni/serial/DeployApp 0
333 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 4.16
334 TestStartStop/group/newest-cni/serial/Stop 9.78
335 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.83
336 TestStartStop/group/newest-cni/serial/SecondStart 32.44
337 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.02
338 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.44
339 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.68
340 TestStartStop/group/default-k8s-diff-port/serial/Pause 7.72
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.73
344 TestStartStop/group/newest-cni/serial/Pause 10.54
345 TestNetworkPlugins/group/auto/Start 120.65
346 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.02
347 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.26
348 TestNetworkPlugins/group/kindnet/Start 121.56
349 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
350 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 1.09
351 TestStartStop/group/old-k8s-version/serial/Pause 8.55
352 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.68
353 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.83
354 TestStartStop/group/embed-certs/serial/Pause 9.13
355 TestNetworkPlugins/group/calico/Start 170.03
356 TestNetworkPlugins/group/custom-flannel/Start 103.17
357 TestNetworkPlugins/group/auto/KubeletFlags 1
358 TestNetworkPlugins/group/auto/NetCatPod 21.01
359 TestNetworkPlugins/group/kindnet/ControllerPod 6.02
360 TestNetworkPlugins/group/kindnet/KubeletFlags 0.99
361 TestNetworkPlugins/group/kindnet/NetCatPod 20.8
362 TestNetworkPlugins/group/auto/DNS 0.47
363 TestNetworkPlugins/group/auto/Localhost 0.47
364 TestNetworkPlugins/group/auto/HairPin 0.43
365 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.93
366 TestNetworkPlugins/group/custom-flannel/NetCatPod 19.15
367 TestNetworkPlugins/group/kindnet/DNS 0.42
368 TestNetworkPlugins/group/kindnet/Localhost 0.55
369 TestNetworkPlugins/group/kindnet/HairPin 0.37
370 TestNetworkPlugins/group/custom-flannel/DNS 0.41
371 TestNetworkPlugins/group/custom-flannel/Localhost 0.4
372 TestNetworkPlugins/group/custom-flannel/HairPin 0.38
373 TestNetworkPlugins/group/false/Start 123.19
374 TestNetworkPlugins/group/calico/ControllerPod 6.01
375 TestNetworkPlugins/group/calico/KubeletFlags 0.97
376 TestNetworkPlugins/group/calico/NetCatPod 27.87
377 TestNetworkPlugins/group/enable-default-cni/Start 111.87
378 TestNetworkPlugins/group/flannel/Start 107.45
379 TestNetworkPlugins/group/calico/DNS 0.43
380 TestNetworkPlugins/group/calico/Localhost 0.64
381 TestNetworkPlugins/group/calico/HairPin 0.32
382 TestNetworkPlugins/group/kubenet/Start 109.6
383 TestNetworkPlugins/group/false/KubeletFlags 0.87
384 TestNetworkPlugins/group/false/NetCatPod 18.63
385 TestNetworkPlugins/group/flannel/ControllerPod 6.02
386 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.81
387 TestNetworkPlugins/group/enable-default-cni/NetCatPod 20.64
388 TestNetworkPlugins/group/false/DNS 0.38
389 TestNetworkPlugins/group/false/Localhost 0.46
390 TestNetworkPlugins/group/false/HairPin 0.42
391 TestNetworkPlugins/group/flannel/KubeletFlags 1.01
392 TestNetworkPlugins/group/flannel/NetCatPod 20.64
393 TestNetworkPlugins/group/enable-default-cni/DNS 0.38
394 TestNetworkPlugins/group/enable-default-cni/Localhost 0.35
395 TestNetworkPlugins/group/enable-default-cni/HairPin 0.35
396 TestNetworkPlugins/group/flannel/DNS 0.45
397 TestNetworkPlugins/group/flannel/Localhost 0.34
398 TestNetworkPlugins/group/flannel/HairPin 0.39
399 TestNetworkPlugins/group/bridge/Start 95.65
400 TestNetworkPlugins/group/kubenet/KubeletFlags 0.85
401 TestNetworkPlugins/group/kubenet/NetCatPod 20.79
402 TestNetworkPlugins/group/kubenet/DNS 0.35
403 TestNetworkPlugins/group/kubenet/Localhost 0.31
404 TestNetworkPlugins/group/kubenet/HairPin 0.33
405 TestNetworkPlugins/group/bridge/KubeletFlags 0.78
406 TestNetworkPlugins/group/bridge/NetCatPod 16.63
407 TestNetworkPlugins/group/bridge/DNS 0.34
408 TestNetworkPlugins/group/bridge/Localhost 0.3
409 TestNetworkPlugins/group/bridge/HairPin 0.31
TestDownloadOnly/v1.20.0/json-events (8.56s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-216600 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-216600 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker: (8.5561855s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.56s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.08s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-216600
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-216600: exit status 85 (311.1178ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-216600 | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:29 UTC |          |
	|         | -p download-only-216600        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=docker                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 06:29:44
	Running on machine: minikube2
	Binary: Built with gc go1.23.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 06:29:44.618959    3016 out.go:345] Setting OutFile to fd 752 ...
	I0915 06:29:44.692083    3016 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:29:44.692083    3016 out.go:358] Setting ErrFile to fd 756...
	I0915 06:29:44.692083    3016 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0915 06:29:44.707407    3016 root.go:314] Error reading config file at C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube2\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0915 06:29:44.718372    3016 out.go:352] Setting JSON to true
	I0915 06:29:44.721896    3016 start.go:129] hostinfo: {"hostname":"minikube2","uptime":5557,"bootTime":1726376226,"procs":184,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0915 06:29:44.721896    3016 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 06:29:44.730988    3016 out.go:97] [download-only-216600] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0915 06:29:44.731926    3016 notify.go:220] Checking for updates...
	W0915 06:29:44.731926    3016 preload.go:293] Failed to list preload files: open C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0915 06:29:44.734619    3016 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0915 06:29:44.740982    3016 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0915 06:29:44.746995    3016 out.go:169] MINIKUBE_LOCATION=19644
	I0915 06:29:44.754100    3016 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0915 06:29:44.761148    3016 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0915 06:29:44.762232    3016 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:29:44.958490    3016 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.1 (166053)
	I0915 06:29:44.965494    3016 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:29:45.317873    3016 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:76 SystemTime:2024-09-15 06:29:45.289396759 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0915 06:29:45.331443    3016 out.go:97] Using the docker driver based on user configuration
	I0915 06:29:45.332118    3016 start.go:297] selected driver: docker
	I0915 06:29:45.332178    3016 start.go:901] validating driver "docker" against <nil>
	I0915 06:29:45.348772    3016 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:29:45.703467    3016 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:76 SystemTime:2024-09-15 06:29:45.681382433 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0915 06:29:45.704459    3016 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 06:29:45.772099    3016 start_flags.go:393] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I0915 06:29:45.772553    3016 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0915 06:29:45.780821    3016 out.go:169] Using Docker Desktop driver with root privileges
	I0915 06:29:45.783975    3016 cni.go:84] Creating CNI manager for ""
	I0915 06:29:45.783975    3016 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0915 06:29:45.785159    3016 start.go:340] cluster config:
	{Name:download-only-216600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-216600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:29:45.787700    3016 out.go:97] Starting "download-only-216600" primary control-plane node in "download-only-216600" cluster
	I0915 06:29:45.787700    3016 cache.go:121] Beginning downloading kic base image for docker with docker
	I0915 06:29:45.791641    3016 out.go:97] Pulling base image v0.0.45-1726358845-19644 ...
	I0915 06:29:45.792671    3016 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0915 06:29:45.792671    3016 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0915 06:29:45.839340    3016 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0915 06:29:45.839480    3016 cache.go:56] Caching tarball of preloaded images
	I0915 06:29:45.840000    3016 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0915 06:29:45.845213    3016 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0915 06:29:45.845213    3016 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0915 06:29:45.878947    3016 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0915 06:29:45.879073    3016 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1726358845-19644@sha256_4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0.tar
	I0915 06:29:45.879073    3016 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1726358845-19644@sha256_4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0.tar
	I0915 06:29:45.879073    3016 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0915 06:29:45.880306    3016 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0915 06:29:45.914890    3016 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0915 06:29:49.540528    3016 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0915 06:29:49.575881    3016 preload.go:254] verifying checksum of C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-216600 host does not exist
	  To start a cluster, run: "minikube start -p download-only-216600"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.31s)
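
Note on the preload download above: minikube fetches the tarball with a "?checksum=md5:..." query hint and then saves and verifies the checksum of the file on disk (preload.go:236/247/254 in the log). Below is a minimal, hypothetical Go sketch of that download-and-verify pattern; it is not the real preload.go code, only an illustration, with the URL and digest copied from the log lines above.

// checksum_sketch.go - illustration only; not the actual minikube preload.go logic.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadAndVerify fetches url into dest and compares the MD5 digest of the
// downloaded bytes against wantMD5 (hex-encoded), mirroring the
// "?checksum=md5:..." hint seen in the download.go line above.
func downloadAndVerify(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	h := md5.New()
	// Stream the body to the file and the hash in one pass.
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// URL and expected digest copied from the log above.
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4"
	if err := downloadAndVerify(url, "preload.tar.lz4", "9a82241e9b8b4ad2b5cca73108f2c7a3"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("preload downloaded and verified")
}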

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (1.96s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.9611707s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.96s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.7s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-216600
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.70s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/json-events (7.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-425100 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-425100 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker: (7.3738788s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (7.37s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/LogsDuration (0.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-425100
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-425100: exit status 85 (323.2668ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-216600 | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:29 UTC |                     |
	|         | -p download-only-216600        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=docker                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
	| delete  | -p download-only-216600        | download-only-216600 | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:29 UTC | 15 Sep 24 06:29 UTC |
	| start   | -o=json --download-only        | download-only-425100 | minikube2\jenkins | v1.34.0 | 15 Sep 24 06:29 UTC |                     |
	|         | -p download-only-425100        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=docker                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 06:29:56
	Running on machine: minikube2
	Binary: Built with gc go1.23.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 06:29:56.239336   10468 out.go:345] Setting OutFile to fd 876 ...
	I0915 06:29:56.320903   10468 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:29:56.320903   10468 out.go:358] Setting ErrFile to fd 880...
	I0915 06:29:56.320903   10468 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:29:56.345852   10468 out.go:352] Setting JSON to true
	I0915 06:29:56.349779   10468 start.go:129] hostinfo: {"hostname":"minikube2","uptime":5569,"bootTime":1726376226,"procs":184,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0915 06:29:56.349779   10468 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 06:29:56.640238   10468 out.go:97] [download-only-425100] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0915 06:29:56.640961   10468 notify.go:220] Checking for updates...
	I0915 06:29:56.646244   10468 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0915 06:29:56.656995   10468 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0915 06:29:56.667710   10468 out.go:169] MINIKUBE_LOCATION=19644
	I0915 06:29:56.676976   10468 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0915 06:29:56.684471   10468 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0915 06:29:56.685747   10468 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:29:56.877976   10468 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.1 (166053)
	I0915 06:29:56.886460   10468 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:29:57.213175   10468 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:76 SystemTime:2024-09-15 06:29:57.184837605 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0915 06:29:57.219821   10468 out.go:97] Using the docker driver based on user configuration
	I0915 06:29:57.219821   10468 start.go:297] selected driver: docker
	I0915 06:29:57.219919   10468 start.go:901] validating driver "docker" against <nil>
	I0915 06:29:57.233863   10468 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:29:57.571930   10468 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:76 SystemTime:2024-09-15 06:29:57.543335443 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0915 06:29:57.572155   10468 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 06:29:57.622630   10468 start_flags.go:393] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I0915 06:29:57.624077   10468 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0915 06:29:57.638608   10468 out.go:169] Using Docker Desktop driver with root privileges
	I0915 06:29:57.644678   10468 cni.go:84] Creating CNI manager for ""
	I0915 06:29:57.644820   10468 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0915 06:29:57.644820   10468 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0915 06:29:57.644979   10468 start.go:340] cluster config:
	{Name:download-only-425100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-425100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:29:57.650139   10468 out.go:97] Starting "download-only-425100" primary control-plane node in "download-only-425100" cluster
	I0915 06:29:57.650139   10468 cache.go:121] Beginning downloading kic base image for docker with docker
	I0915 06:29:57.657119   10468 out.go:97] Pulling base image v0.0.45-1726358845-19644 ...
	I0915 06:29:57.658093   10468 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 06:29:57.658093   10468 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0915 06:29:57.696706   10468 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0915 06:29:57.696706   10468 cache.go:56] Caching tarball of preloaded images
	I0915 06:29:57.697262   10468 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0915 06:29:57.741422   10468 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0915 06:29:57.741422   10468 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1726358845-19644@sha256_4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0.tar
	I0915 06:29:57.742351   10468 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0.tar -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1726358845-19644@sha256_4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0.tar
	I0915 06:29:57.742351   10468 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0915 06:29:57.742351   10468 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0915 06:29:57.743421   10468 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0915 06:29:57.779897   10468 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0915 06:29:57.779897   10468 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0915 06:29:57.780573   10468 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0915 06:29:57.805162   10468 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4?checksum=md5:42e9a173dd5f0c45ed1a890dd06aec5a -> C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0915 06:30:01.305023   10468 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0915 06:30:01.305960   10468 preload.go:254] verifying checksum of C:\Users\jenkins.minikube2\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-425100 host does not exist
	  To start a cluster, run: "minikube start -p download-only-425100"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.32s)
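
The "Last Start" log above runs 'docker system info --format "{{json .}}"' twice (cli_runner.go:164) and reads the result to size the cluster (start_flags.go:393 suggests a 16300MB allocation from container=32098MB). The sketch below is a hypothetical, trimmed-down reader for that same JSON; the struct names only a few fields visible in the dump above and is not minikube's real info type.

// dockerinfo_sketch.go - roughly what the cli_runner.go call above gathers:
// run `docker system info --format "{{json .}}"` and pick out a few fields
// (field names taken from the JSON dump in the log, struct is an assumption).
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type dockerInfo struct {
	NCPU            int    `json:"NCPU"`
	MemTotal        int64  `json:"MemTotal"`
	OperatingSystem string `json:"OperatingSystem"`
	ServerVersion   string `json:"ServerVersion"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		fmt.Println("docker not available:", err)
		return
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		fmt.Println("decode:", err)
		return
	}
	fmt.Printf("%s %s: %d CPUs, %d MiB RAM\n",
		info.OperatingSystem, info.ServerVersion, info.NCPU, info.MemTotal/1024/1024)
}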

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAll (1.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.2347254s)
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (1.24s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.71s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-425100
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.71s)

                                                
                                    
x
+
TestDownloadOnlyKic (3.23s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-561200 --alsologtostderr --driver=docker
aaa_download_only_test.go:232: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-561200 --alsologtostderr --driver=docker: (1.4759323s)
helpers_test.go:175: Cleaning up "download-docker-561200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-561200
--- PASS: TestDownloadOnlyKic (3.23s)

                                                
                                    
x
+
TestBinaryMirror (2.98s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-364200 --alsologtostderr --binary-mirror http://127.0.0.1:64816 --driver=docker
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-364200 --alsologtostderr --binary-mirror http://127.0.0.1:64816 --driver=docker: (1.915744s)
helpers_test.go:175: Cleaning up "binary-mirror-364200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-364200
--- PASS: TestBinaryMirror (2.98s)
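
TestBinaryMirror starts minikube with --binary-mirror http://127.0.0.1:64816, i.e. Kubernetes binaries are fetched from a local HTTP server instead of the default upstream. How the test provisions its mirror is not shown in this log; the sketch below is only a hypothetical stand-in for such a server, and the ./mirror directory layout in the comment is an assumption for illustration.

// mirror_sketch.go - hypothetical stand-in for the local binary mirror that
// TestBinaryMirror points minikube at via --binary-mirror http://127.0.0.1:64816.
package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve a directory laid out like the upstream release tree (layout assumed),
	// e.g. ./mirror/release/v1.31.1/bin/windows/amd64/kubectl.exe
	fs := http.FileServer(http.Dir("./mirror"))
	log.Println("serving binary mirror on 127.0.0.1:64816")
	log.Fatal(http.ListenAndServe("127.0.0.1:64816", fs))
}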

                                                
                                    
x
+
TestOffline (148.88s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-516600 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-516600 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker: (2m23.3947506s)
helpers_test.go:175: Cleaning up "offline-docker-516600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-516600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-516600: (5.4885663s)
--- PASS: TestOffline (148.88s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.28s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-291300
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-291300: exit status 85 (278.4765ms)

                                                
                                                
-- stdout --
	* Profile "addons-291300" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-291300"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.28s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.26s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-291300
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-291300: exit status 85 (262.001ms)

                                                
                                                
-- stdout --
	* Profile "addons-291300" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-291300"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.26s)
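
Both PreSetup tests above expect an addons enable/disable command against a profile that does not exist yet to fail with exit status 85 while still printing the "Profile ... not found" hint shown in stdout. A hypothetical way to reproduce that check outside the test suite is sketched below; the binary path is the one used in this report and depends on your checkout.

// exitcode_sketch.go - illustrative re-run of the exit-status-85 check above;
// not part of the test suite.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Binary path and arguments as used in this report (adjust for your checkout).
	cmd := exec.Command("out/minikube-windows-amd64.exe", "addons", "enable", "dashboard", "-p", "addons-291300")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Println("exit code:", exitErr.ExitCode()) // the tests above expect 85 here
	} else if err != nil {
		fmt.Println("could not run minikube:", err)
	}
}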

                                                
                                    
x
+
TestAddons/Setup (563.83s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-291300 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-291300 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (9m23.8271071s)
--- PASS: TestAddons/Setup (563.83s)

                                                
                                    
x
+
TestAddons/serial/Volcano (55.46s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 25.3657ms
addons_test.go:913: volcano-controller stabilized in 25.3657ms
addons_test.go:905: volcano-admission stabilized in 25.3657ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-dw5kn" [224fec29-c295-4948-a93f-53c79cb85072] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.0093307s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-tjxcx" [860f4897-a2ad-460b-a53c-eaf3de9a6267] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.0084321s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-dlp8n" [7a70c486-8cb5-421a-8065-7abf6bb47f3f] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.0094669s
addons_test.go:932: (dbg) Run:  kubectl --context addons-291300 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-291300 create -f testdata\vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-291300 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [e43b69eb-5a0f-41ae-b2ea-4368cf53b823] Pending
helpers_test.go:344: "test-job-nginx-0" [e43b69eb-5a0f-41ae-b2ea-4368cf53b823] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [e43b69eb-5a0f-41ae-b2ea-4368cf53b823] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 27.015815s
addons_test.go:968: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-291300 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-windows-amd64.exe -p addons-291300 addons disable volcano --alsologtostderr -v=1: (11.4672274s)
--- PASS: TestAddons/serial/Volcano (55.46s)
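
The 'waiting ... for pods matching "<label>" in namespace ...' lines above (helpers_test.go:344) all follow the same poll-until-Running pattern. The sketch below approximates that loop by shelling out to kubectl; it is not the real helpers_test.go implementation, and the context, namespace and label in main() are simply the Volcano values from this test.

// podwait_sketch.go - rough approximation of the "waiting for pods matching"
// helper used throughout these addon tests; shells out to kubectl rather than
// using whatever the real helper does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForRunning polls until at least one pod matches selector in ns and all
// matching pods report phase Running, or the timeout expires.
func waitForRunning(kubeContext, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext, "get", "pods",
			"-n", ns, "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil {
			phases := strings.Fields(string(out))
			running := len(phases) > 0
			for _, p := range phases {
				if p != "Running" {
					running = false
				}
			}
			if running {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for %q in namespace %q", selector, ns)
}

func main() {
	if err := waitForRunning("addons-291300", "volcano-system", "app=volcano-scheduler", 6*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("pods are Running")
}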

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.37s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-291300 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-291300 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.37s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (15.15s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-vn22b" [c3ef9a5c-07c0-42fc-a2f5-c189490c83cb] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.010842s
addons_test.go:851: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-291300
addons_test.go:851: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-291300: (9.1386809s)
--- PASS: TestAddons/parallel/InspektorGadget (15.15s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.68s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 8.6784ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-fmjgd" [247331d3-cf85-4e2f-8739-94d511c0a400] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.014884s
addons_test.go:417: (dbg) Run:  kubectl --context addons-291300 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-291300 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:434: (dbg) Done: out/minikube-windows-amd64.exe -p addons-291300 addons disable metrics-server --alsologtostderr -v=1: (1.4706574s)
--- PASS: TestAddons/parallel/MetricsServer (6.68s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (16.19s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 6.0268ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-cpg74" [6745c867-b5d7-4421-982e-0bec5d59ee9b] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0074695s
addons_test.go:475: (dbg) Run:  kubectl --context addons-291300 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-291300 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.7292656s)
addons_test.go:492: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-291300 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:492: (dbg) Done: out/minikube-windows-amd64.exe -p addons-291300 addons disable helm-tiller --alsologtostderr -v=1: (2.4339995s)
--- PASS: TestAddons/parallel/HelmTiller (16.19s)

                                                
                                    
x
+
TestAddons/parallel/CSI (63.53s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 22.3525ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-291300 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291300 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291300 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-291300 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d8cff09d-b465-4614-a68a-c1a3c6518bcf] Pending
helpers_test.go:344: "task-pv-pod" [d8cff09d-b465-4614-a68a-c1a3c6518bcf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d8cff09d-b465-4614-a68a-c1a3c6518bcf] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.0082716s
addons_test.go:590: (dbg) Run:  kubectl --context addons-291300 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-291300 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-291300 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-291300 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-291300 delete pod task-pv-pod: (1.3329511s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-291300 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-291300 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291300 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-291300 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [fea7a48e-5700-4867-8c99-541ec996de6a] Pending
helpers_test.go:344: "task-pv-pod-restore" [fea7a48e-5700-4867-8c99-541ec996de6a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [fea7a48e-5700-4867-8c99-541ec996de6a] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.0084245s
addons_test.go:632: (dbg) Run:  kubectl --context addons-291300 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-291300 delete pod task-pv-pod-restore: (2.6834586s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-291300 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-291300 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-291300 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-windows-amd64.exe -p addons-291300 addons disable csi-hostpath-driver --alsologtostderr -v=1: (8.6419283s)
addons_test.go:648: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-291300 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-windows-amd64.exe -p addons-291300 addons disable volumesnapshots --alsologtostderr -v=1: (2.4639431s)
--- PASS: TestAddons/parallel/CSI (63.53s)
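
The long run of 'get pvc hpvc -o jsonpath={.status.phase}' calls above is the same idea applied to PersistentVolumeClaims: poll the phase until it reaches Bound before moving on to the pod, snapshot and restore steps. A compact, hypothetical version of that poll (again via kubectl, not the real helpers_test.go code) looks like this:

// pvcwait_sketch.go - mirrors the repeated PVC phase polling shown above;
// hypothetical helper, not the code in helpers_test.go.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func pvcPhase(kubeContext, ns, name string) (string, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext, "get", "pvc", name,
		"-n", ns, "-o", "jsonpath={.status.phase}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if phase, err := pvcPhase("addons-291300", "default", "hpvc"); err == nil && phase == "Bound" {
			fmt.Println("pvc hpvc is Bound")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pvc hpvc to become Bound")
}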

                                                
                                    
x
+
TestAddons/parallel/Headlamp (30.59s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-291300 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-291300 --alsologtostderr -v=1: (3.026504s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-g6chv" [49aaf2a1-68ea-46e0-8211-6a47b8b94047] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-g6chv" [49aaf2a1-68ea-46e0-8211-6a47b8b94047] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-g6chv" [49aaf2a1-68ea-46e0-8211-6a47b8b94047] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 21.0077886s
addons_test.go:839: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-291300 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-windows-amd64.exe -p addons-291300 addons disable headlamp --alsologtostderr -v=1: (6.5538507s)
--- PASS: TestAddons/parallel/Headlamp (30.59s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (7.49s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-llzpj" [a3f5b280-fc33-4010-a144-4165e7b668c4] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.0081638s
addons_test.go:870: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-291300
addons_test.go:870: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-291300: (1.4712792s)
--- PASS: TestAddons/parallel/CloudSpanner (7.49s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (15.14s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-291300 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-291300 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291300 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-291300 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [023f610e-cc65-4291-a35e-29657fa7d80a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [023f610e-cc65-4291-a35e-29657fa7d80a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [023f610e-cc65-4291-a35e-29657fa7d80a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.0125776s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-291300 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-291300 ssh "cat /opt/local-path-provisioner/pvc-0c1fc810-3bbb-4ea2-b810-dcc494d81a59_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-291300 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-291300 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-291300 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (15.14s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (7.51s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-4b4bn" [fbde28c1-eb98-436e-91b6-a94e359bc1a4] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0114849s
addons_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-291300
addons_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-291300: (1.4933601s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (7.51s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (12.61s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-5km24" [1f71317e-2de3-4d90-b548-9456091cd7e6] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.0081046s
addons_test.go:1076: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-291300 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-windows-amd64.exe -p addons-291300 addons disable yakd --alsologtostderr -v=1: (6.598279s)
--- PASS: TestAddons/parallel/Yakd (12.61s)

TestAddons/StoppedEnableDisable (13.82s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-291300
addons_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-291300: (12.5040438s)
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-291300
addons_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-291300
addons_test.go:187: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-291300
--- PASS: TestAddons/StoppedEnableDisable (13.82s)

TestCertOptions (75.47s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-189700 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost
E0915 07:54:37.824849    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-189700 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (1m8.7473757s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-189700 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-189700 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-189700 -- "sudo cat /etc/kubernetes/admin.conf": (1.006005s)
helpers_test.go:175: Cleaning up "cert-options-189700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-189700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-189700: (4.7430252s)
--- PASS: TestCertOptions (75.47s)
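
Note: the SAN check that the `openssl x509 -text -noout` call above makes visible can also be reproduced offline. The following is a minimal Go sketch, not part of the test suite; it assumes the apiserver certificate has first been copied out of the node (e.g. via `minikube ssh` or `docker cp`) to a local file whose path is passed as the first argument, and the expected IP mirrors the --apiserver-ips flag used above.

// sancheck.go - print the SANs of a PEM-encoded certificate and confirm that
// an address requested via --apiserver-ips appears among them.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"net"
	"os"
)

func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: sancheck <path-to-apiserver.crt>")
	}
	data, err := os.ReadFile(os.Args[1]) // apiserver.crt copied from the node
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("DNS names:", cert.DNSNames)
	fmt.Println("IP addresses:", cert.IPAddresses)

	// Expected SAN, mirroring --apiserver-ips=192.168.15.15 above.
	want := net.ParseIP("192.168.15.15")
	found := false
	for _, ip := range cert.IPAddresses {
		if ip.Equal(want) {
			found = true
		}
	}
	fmt.Println("contains 192.168.15.15:", found)
}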

TestCertExpiration (298.57s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-583000 --memory=2048 --cert-expiration=3m --driver=docker
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-583000 --memory=2048 --cert-expiration=3m --driver=docker: (1m15.6809583s)
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-583000 --memory=2048 --cert-expiration=8760h --driver=docker
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-583000 --memory=2048 --cert-expiration=8760h --driver=docker: (37.4666475s)
helpers_test.go:175: Cleaning up "cert-expiration-583000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-583000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-583000: (5.4262026s)
--- PASS: TestCertExpiration (298.57s)

TestDockerFlags (74.28s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-640000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-640000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (1m8.0919743s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-640000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-640000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-640000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-640000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-640000: (4.4211179s)
--- PASS: TestDockerFlags (74.28s)
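
Note: the assertion behind the two `systemctl show docker` calls above is plain string matching against the `Environment=` and `ExecStart=` properties. A toy version of that check, assuming the property line has already been captured into a string shaped like the --docker-env flags passed above:

// dockerflags_check.go - verify that --docker-env values show up in
// systemd's Environment property for the docker unit.
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Assumed sample output of `systemctl show docker --property=Environment --no-pager`.
	line := "Environment=FOO=BAR BAZ=BAT"

	env := map[string]string{}
	for _, kv := range strings.Fields(strings.TrimPrefix(line, "Environment=")) {
		if k, v, ok := strings.Cut(kv, "="); ok {
			env[k] = v
		}
	}
	for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
		k, v, _ := strings.Cut(want, "=")
		fmt.Printf("%s present with expected value: %v\n", want, env[k] == v)
	}
}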

TestForceSystemdFlag (113.82s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-000900 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-000900 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker: (1m48.0811541s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-000900 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-000900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-000900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-000900: (4.8079078s)
--- PASS: TestForceSystemdFlag (113.82s)

TestForceSystemdEnv (96.73s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-084200 --memory=2048 --alsologtostderr -v=5 --driver=docker
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-084200 --memory=2048 --alsologtostderr -v=5 --driver=docker: (1m30.6216258s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-084200 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-084200 ssh "docker info --format {{.CgroupDriver}}": (1.2366934s)
helpers_test.go:175: Cleaning up "force-systemd-env-084200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-084200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-084200: (4.8714497s)
--- PASS: TestForceSystemdEnv (96.73s)

TestErrorSpam/start (4.02s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-382400 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-382400 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 start --dry-run: (1.2884996s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-382400 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-382400 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 start --dry-run: (1.3305559s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-382400 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-382400 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 start --dry-run: (1.3927811s)
--- PASS: TestErrorSpam/start (4.02s)

TestErrorSpam/status (2.93s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-382400 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-382400 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 status: (1.0247278s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-382400 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 status
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-382400 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 status
--- PASS: TestErrorSpam/status (2.93s)

TestErrorSpam/pause (3.41s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-382400 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-382400 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 pause: (1.5264445s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-382400 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-382400 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 pause
--- PASS: TestErrorSpam/pause (3.41s)

TestErrorSpam/unpause (3.64s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-382400 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-382400 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 unpause: (1.2721355s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-382400 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-382400 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 unpause: (1.32057s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-382400 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-382400 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 unpause: (1.0403432s)
--- PASS: TestErrorSpam/unpause (3.64s)

TestErrorSpam/stop (14.89s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-382400 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-382400 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 stop: (6.961423s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-382400 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-382400 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 stop: (3.9543441s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-382400 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-382400 --log_dir C:\Users\jenkins.minikube2\AppData\Local\Temp\nospam-382400 stop: (3.9685198s)
--- PASS: TestErrorSpam/stop (14.89s)

TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: C:\Users\jenkins.minikube2\minikube-integration\.minikube\files\etc\test\nested\copy\8584\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

TestFunctional/serial/StartWithProxy (93.48s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-804700 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker
functional_test.go:2234: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-804700 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker: (1m33.4684954s)
--- PASS: TestFunctional/serial/StartWithProxy (93.48s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (43.07s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-804700 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-804700 --alsologtostderr -v=8: (43.071082s)
functional_test.go:663: soft start took 43.072168s for "functional-804700" cluster.
--- PASS: TestFunctional/serial/SoftStart (43.07s)

TestFunctional/serial/KubeContext (0.14s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.14s)

TestFunctional/serial/KubectlGetPods (0.28s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-804700 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.28s)

TestFunctional/serial/CacheCmd/cache/add_remote (6.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804700 cache add registry.k8s.io/pause:3.1: (2.2462861s)
functional_test.go:1049: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804700 cache add registry.k8s.io/pause:3.3: (2.0817766s)
functional_test.go:1049: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804700 cache add registry.k8s.io/pause:latest: (2.1027158s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (6.43s)

TestFunctional/serial/CacheCmd/cache/add_local (3.62s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-804700 C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local627899452\001
functional_test.go:1077: (dbg) Done: docker build -t minikube-local-cache-test:functional-804700 C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local627899452\001: (1.713812s)
functional_test.go:1089: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 cache add minikube-local-cache-test:functional-804700
functional_test.go:1089: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804700 cache add minikube-local-cache-test:functional-804700: (1.5222439s)
functional_test.go:1094: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 cache delete minikube-local-cache-test:functional-804700
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-804700
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (3.62s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.27s)

TestFunctional/serial/CacheCmd/cache/list (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.26s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.85s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.85s)

TestFunctional/serial/CacheCmd/cache/cache_reload (4.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-804700 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (795.1026ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804700 cache reload: (1.6684713s)
functional_test.go:1163: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (4.06s)
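
Note: the reload sequence above (cache add, remove the image inside the node, confirm it is gone, `cache reload`, confirm it is back) can be replayed by hand against the same binary. A hedged sketch of that flow as a small Go helper that shells out to the CLI; the binary path and profile name are placeholders taken from this run, and the subcommands are exactly the ones logged above.

// cachereload.go - replay the cache_reload sequence against an existing profile.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	cmd := exec.Command("out/minikube-windows-amd64.exe", args...) // placeholder binary path
	out, _ := cmd.CombinedOutput()
	fmt.Printf("$ minikube %v\n%s\n", args, out)
}

func main() {
	const profile = "functional-804700" // placeholder profile name
	const img = "registry.k8s.io/pause:latest"

	run("-p", profile, "cache", "add", img)
	run("-p", profile, "ssh", "sudo", "docker", "rmi", img)      // drop the image inside the node
	run("-p", profile, "ssh", "sudo", "crictl", "inspecti", img) // expected to fail now
	run("-p", profile, "cache", "reload")                        // push cached images back in
	run("-p", profile, "ssh", "sudo", "crictl", "inspecti", img) // expected to succeed again
}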

TestFunctional/serial/CacheCmd/cache/delete (0.53s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.53s)

TestFunctional/serial/MinikubeKubectlCmd (0.53s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 kubectl -- --context functional-804700 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.53s)

TestFunctional/serial/ExtraConfig (50.53s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-804700 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0915 06:54:37.792227    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 06:54:37.805149    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 06:54:37.818149    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 06:54:37.841133    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 06:54:37.884156    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 06:54:37.966643    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 06:54:38.128847    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 06:54:38.455365    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 06:54:39.098081    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 06:54:40.381388    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 06:54:42.943246    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 06:54:48.066515    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 06:54:58.309533    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 06:55:18.792852    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-804700 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (50.5268009s)
functional_test.go:761: restart took 50.5268009s for "functional-804700" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (50.53s)

TestFunctional/serial/ComponentHealth (0.19s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-804700 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.19s)
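
Note: the health check above is a thin wrapper over `kubectl get po -l tier=control-plane -n kube-system -o=json`: every control-plane pod must report phase Running and a Ready condition of True. A minimal sketch of that evaluation, assuming the JSON has been saved to a file passed as the first argument; the field names follow the standard Pod status schema, and the `component` label is the one kubeadm puts on static control-plane pods.

// componenthealth.go - report control-plane pod phases/conditions from kubectl JSON.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: componenthealth <pods.json>")
	}
	data, err := os.ReadFile(os.Args[1]) // output of: kubectl get po ... -o=json > pods.json
	if err != nil {
		log.Fatal(err)
	}
	var pods podList
	if err := json.Unmarshal(data, &pods); err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase: %s, ready: %s\n", p.Metadata.Labels["component"], p.Status.Phase, ready)
	}
}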

TestFunctional/serial/LogsCmd (2.45s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 logs
functional_test.go:1236: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804700 logs: (2.4459236s)
--- PASS: TestFunctional/serial/LogsCmd (2.45s)

TestFunctional/serial/LogsFileCmd (2.56s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 logs --file C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2485873946\001\logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804700 logs --file C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2485873946\001\logs.txt: (2.5511493s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.56s)

TestFunctional/serial/InvalidService (5.99s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-804700 apply -f testdata\invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-804700
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-804700: exit status 115 (1.1527017s)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32008 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_service_6bd82f1fe87f7552f02cc11dc4370801e3dafecc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-804700 delete -f testdata\invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-804700 delete -f testdata\invalidsvc.yaml: (1.3484575s)
--- PASS: TestFunctional/serial/InvalidService (5.99s)

TestFunctional/parallel/ConfigCmd (1.63s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-804700 config get cpus: exit status 14 (251.9436ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-804700 config get cpus: exit status 14 (233.4881ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (1.63s)

TestFunctional/parallel/DryRun (2.95s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-804700 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-804700 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (1.675273s)
-- stdout --
	* [functional-804700] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0915 06:55:45.954975    2224 out.go:345] Setting OutFile to fd 816 ...
	I0915 06:55:46.043973    2224 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:55:46.043973    2224 out.go:358] Setting ErrFile to fd 1112...
	I0915 06:55:46.043973    2224 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:55:46.069957    2224 out.go:352] Setting JSON to false
	I0915 06:55:46.072950    2224 start.go:129] hostinfo: {"hostname":"minikube2","uptime":7119,"bootTime":1726376226,"procs":184,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0915 06:55:46.073993    2224 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 06:55:46.078006    2224 out.go:177] * [functional-804700] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0915 06:55:46.080956    2224 notify.go:220] Checking for updates...
	I0915 06:55:46.083958    2224 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0915 06:55:46.086952    2224 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 06:55:46.088949    2224 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0915 06:55:46.091948    2224 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 06:55:46.093981    2224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:55:46.097957    2224 config.go:182] Loaded profile config "functional-804700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 06:55:46.099945    2224 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:55:46.305918    2224 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.1 (166053)
	I0915 06:55:46.316453    2224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:55:46.694224    2224 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2024-09-15 06:55:46.669135417 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0915 06:55:47.297081    2224 out.go:177] * Using the docker driver based on existing profile
	I0915 06:55:47.305901    2224 start.go:297] selected driver: docker
	I0915 06:55:47.305901    2224 start.go:901] validating driver "docker" against &{Name:functional-804700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-804700 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:55:47.306911    2224 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 06:55:47.409562    2224 out.go:201] 
	W0915 06:55:47.416552    2224 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0915 06:55:47.461567    2224 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-804700 --dry-run --alsologtostderr -v=1 --driver=docker
functional_test.go:991: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-804700 --dry-run --alsologtostderr -v=1 --driver=docker: (1.2751724s)
--- PASS: TestFunctional/parallel/DryRun (2.95s)
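
Note: the exit status 23 above is the pre-flight memory validation rejecting `--memory 250MB` against the 1800MB floor quoted in the RSRC_INSUFFICIENT_REQ_MEMORY message. A toy version of that check; the threshold is taken from the error text in this log, not from minikube's source.

// memcheck.go - reject a memory request below the usable minimum, as the dry run does.
package main

import "fmt"

const minUsableMB = 1800 // floor quoted in the RSRC_INSUFFICIENT_REQ_MEMORY message above

func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250))  // mirrors --memory 250MB above: rejected
	fmt.Println(validateMemory(4000)) // mirrors the profile's configured 4000MB: accepted
}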

TestFunctional/parallel/InternationalLanguage (1.1s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-804700 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-804700 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (1.0962084s)
-- stdout --
	* [functional-804700] minikube v1.34.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0915 06:55:44.868731    4268 out.go:345] Setting OutFile to fd 1388 ...
	I0915 06:55:44.970727    4268 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:55:44.970727    4268 out.go:358] Setting ErrFile to fd 1432...
	I0915 06:55:44.970727    4268 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:55:44.995732    4268 out.go:352] Setting JSON to false
	I0915 06:55:44.997727    4268 start.go:129] hostinfo: {"hostname":"minikube2","uptime":7118,"bootTime":1726376226,"procs":184,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"29dced62-21fb-45d8-a34e-472b66ced002"}
	W0915 06:55:44.997727    4268 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0915 06:55:45.003726    4268 out.go:177] * [functional-804700] minikube v1.34.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0915 06:55:45.007724    4268 notify.go:220] Checking for updates...
	I0915 06:55:45.010723    4268 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	I0915 06:55:45.011734    4268 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 06:55:45.015885    4268 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	I0915 06:55:45.019136    4268 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 06:55:45.021323    4268 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:55:45.025308    4268 config.go:182] Loaded profile config "functional-804700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 06:55:45.027432    4268 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:55:45.239314    4268 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.1 (166053)
	I0915 06:55:45.249309    4268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:55:45.649997    4268 info.go:266] docker info: {ID:e770b6ad-f18b-4184-94e7-b0fdb570deb0 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2024-09-15 06:55:45.620392168 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0915 06:55:45.653995    4268 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0915 06:55:45.656005    4268 start.go:297] selected driver: docker
	I0915 06:55:45.656005    4268 start.go:901] validating driver "docker" against &{Name:functional-804700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-804700 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube2:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:55:45.656005    4268 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 06:55:45.783002    4268 out.go:201] 
	W0915 06:55:45.786007    4268 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0915 06:55:45.789003    4268 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (1.10s)
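Note: the InternationalLanguage test above deliberately requests only 250 MiB of memory (below minikube's usable minimum of 1800 MB) while running under a French locale, so the RSRC_INSUFFICIENT_REQ_MEMORY error is emitted in French. In English the message reads: "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB". A minimal sketch of reproducing a similar localized failure follows; the LC_ALL environment variable and the exact flag set are assumptions, not the test's own invocation.

```go
// Hedged sketch: trigger the same localized RSRC_INSUFFICIENT_REQ_MEMORY
// error by requesting too little memory with a French locale set in the
// environment. LC_ALL=fr is an assumption about how the locale is selected;
// the binary path and profile name match the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-windows-amd64.exe", "start",
		"-p", "functional-804700", "--memory", "250MB", "--alsologtostderr")
	cmd.Env = append(os.Environ(), "LC_ALL=fr") // assumption: locale comes from the environment
	out, err := cmd.CombinedOutput()
	// Expect the French "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY ..." line and a non-zero exit.
	fmt.Printf("%s\nexit: %v\n", out, err)
}
```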

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (3.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 status
functional_test.go:854: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804700 status: (1.1835685s)
functional_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:860: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804700 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (1.153006s)
functional_test.go:872: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (3.33s)
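Note: the StatusCmd test above passes a Go text/template format string to `status -f`, pulling fields named Host, Kubelet, APIServer and Kubeconfig out of minikube's status object. A minimal, self-contained sketch of how such a format string renders is below; the Status struct is a stand-in for illustration, not minikube's real type, and the "kublet" label is reproduced exactly as the test wrote it.

```go
// Minimal sketch (not minikube's source): render the status format string
// used above with text/template against a stand-in Status struct.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	// Same shape as the format passed via `status -f` in the test above.
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	s := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	if err := tmpl.Execute(os.Stdout, s); err != nil {
		panic(err)
	}
}
```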

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.62s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (51.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6bdedbb5-d79b-4900-b85f-af6c118ddab0] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.0075882s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-804700 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-804700 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-804700 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-804700 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d315869a-89fd-4347-bf81-f78b5feaccec] Pending
helpers_test.go:344: "sp-pod" [d315869a-89fd-4347-bf81-f78b5feaccec] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d315869a-89fd-4347-bf81-f78b5feaccec] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 33.0160649s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-804700 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-804700 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-804700 delete -f testdata/storage-provisioner/pod.yaml: (1.925373s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-804700 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [090813e3-feb7-4685-9864-0acb8f3fc212] Pending
helpers_test.go:344: "sp-pod" [090813e3-feb7-4685-9864-0acb8f3fc212] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [090813e3-feb7-4685-9864-0acb8f3fc212] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.0095211s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-804700 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (51.33s)
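Note: the PersistentVolumeClaim test above checks that data written into the PVC-backed mount survives a pod recreation: it touches /tmp/mount/foo, deletes and re-applies pod.yaml, then lists /tmp/mount in the new pod. A minimal sketch of the same check driven from Go follows; the `kubectl wait` step is an addition for illustration, while the context, pod name and manifest path are taken from the log.

```go
// Hedged sketch (not the test's own code) of the persistence check above:
// write a marker file into the claim-backed mount, recreate the pod, and
// confirm the file is still visible. Error handling is kept minimal.
package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) {
	full := append([]string{"--context", "functional-804700"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s(err=%v)\n", full, out, err)
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")       // write into the claim
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml") // drop the pod
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")  // recreate it
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=180s")
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount") // foo should still be listed
}
```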

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.54s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (5.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 ssh -n functional-804700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 cp functional-804700:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestFunctionalparallelCpCmd2356418905\001\cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 ssh -n functional-804700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804700 ssh -n functional-804700 "sudo cat /home/docker/cp-test.txt": (1.0249598s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 ssh -n functional-804700 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804700 ssh -n functional-804700 "sudo cat /tmp/does/not/exist/cp-test.txt": (1.0655949s)
--- PASS: TestFunctional/parallel/CpCmd (5.34s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (80.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-804700 replace --force -f testdata\mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-mf8br" [d716fba1-927f-4ffa-bcbc-aabd5c993636] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-mf8br" [d716fba1-927f-4ffa-bcbc-aabd5c993636] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 1m4.0088717s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-804700 exec mysql-6cdb49bbb-mf8br -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-804700 exec mysql-6cdb49bbb-mf8br -- mysql -ppassword -e "show databases;": exit status 1 (291.5105ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-804700 exec mysql-6cdb49bbb-mf8br -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-804700 exec mysql-6cdb49bbb-mf8br -- mysql -ppassword -e "show databases;": exit status 1 (300.781ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-804700 exec mysql-6cdb49bbb-mf8br -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-804700 exec mysql-6cdb49bbb-mf8br -- mysql -ppassword -e "show databases;": exit status 1 (300.6446ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-804700 exec mysql-6cdb49bbb-mf8br -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-804700 exec mysql-6cdb49bbb-mf8br -- mysql -ppassword -e "show databases;": exit status 1 (332.741ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-804700 exec mysql-6cdb49bbb-mf8br -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-804700 exec mysql-6cdb49bbb-mf8br -- mysql -ppassword -e "show databases;": exit status 1 (284.3449ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-804700 exec mysql-6cdb49bbb-mf8br -- mysql -ppassword -e "show databases;"
E0915 06:59:37.795006    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 07:00:05.522909    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
--- PASS: TestFunctional/parallel/MySQL (80.08s)
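Note: the MySQL test above keeps re-running `mysql -ppassword -e "show databases;"` inside the pod because mysqld needs time to finish initializing after the container reports Running; the ERROR 2002 (socket not yet available) and ERROR 1045 (credentials not yet applied) failures are the expected transient states. A minimal retry sketch under those assumptions; pod name, password and context come from the log, the loop bounds are illustrative.

```go
// Hedged sketch of the retry loop the MySQL test relies on: keep exec'ing
// the query until mysqld inside the pod accepts it.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context", "functional-804700", "exec", "mysql-6cdb49bbb-mf8br", "--",
		"mysql", "-ppassword", "-e", "show databases;"}
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("attempt %d succeeded:\n%s", attempt, out)
			return
		}
		// ERROR 2002 / 1045 while mysqld is still starting up; back off and retry.
		fmt.Printf("attempt %d failed: %v\n", attempt, err)
		time.Sleep(10 * time.Second)
	}
	fmt.Println("mysqld never became ready")
}
```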

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/8584/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 ssh "sudo cat /etc/test/nested/copy/8584/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.99s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (4.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/8584.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 ssh "sudo cat /etc/ssl/certs/8584.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/8584.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 ssh "sudo cat /usr/share/ca-certificates/8584.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/85842.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 ssh "sudo cat /etc/ssl/certs/85842.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/85842.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 ssh "sudo cat /usr/share/ca-certificates/85842.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (4.93s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-804700 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.19s)
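Note: the NodeLabels test above uses a kubectl go-template that ranges over the first node's .metadata.labels map and prints each key. Below is a simplified, runnable version of that template, applied to a stand-in labels map rather than the live node object kubectl would supply.

```go
// Sketch of the go-template pattern used by the NodeLabels test: iterate a
// labels map and print each key. The map values here are illustrative.
package main

import (
	"os"
	"text/template"
)

func main() {
	labels := map[string]string{
		"kubernetes.io/hostname": "functional-804700",
		"kubernetes.io/os":       "linux",
	}
	const tpl = "{{range $k, $v := .}}{{$k}} {{end}}\n"
	t := template.Must(template.New("labels").Parse(tpl))
	if err := t.Execute(os.Stdout, labels); err != nil {
		panic(err)
	}
}
```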

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-804700 ssh "sudo systemctl is-active crio": exit status 1 (981.7575ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.98s)
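Note: `systemctl is-active` exits non-zero when a unit is not running (systemd uses status 3 for inactive units), so the failing exit here, together with the "inactive" stdout, is the expected result for cri-o on a Docker-runtime cluster. A hedged sketch of the same check driven from Go, using the binary path and profile name from the log:

```go
// Hedged sketch: run `systemctl is-active crio` through minikube ssh and
// treat a failing exit plus "inactive" output as the expected state.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-windows-amd64.exe", "-p", "functional-804700",
		"ssh", "sudo systemctl is-active crio")
	out, err := cmd.CombinedOutput()
	fmt.Printf("output: %s", out) // expected: "inactive"
	if err != nil {
		fmt.Printf("command failed as expected for a disabled runtime: %v\n", err)
	}
}
```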

                                                
                                    
x
+
TestFunctional/parallel/License (3.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2288: (dbg) Done: out/minikube-windows-amd64.exe license: (3.2778847s)
--- PASS: TestFunctional/parallel/License (3.29s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (20.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-804700 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-804700 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-k45bs" [11245fdb-13fc-4416-971e-bd8a7670ee5a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-k45bs" [11245fdb-13fc-4416-971e-bd8a7670ee5a] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 20.0088873s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (20.46s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (1.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1275: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.2550612s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (1.61s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (1.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1310: (dbg) Done: out/minikube-windows-amd64.exe profile list: (1.1359144s)
functional_test.go:1315: Took "1.1359144s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1329: Took "352.9922ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (1.49s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1361: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (1.1185863s)
functional_test.go:1366: Took "1.1185863s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1379: Took "257.9917ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (1.38s)
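Note: `profile list -o json` produces machine-readable output, which is what the profile tests above exercise. A small sketch of consuming it follows; the top-level JSON object shape is an assumption here, so the sketch decodes members into raw messages instead of asserting field names.

```go
// Hedged sketch: read `minikube profile list -o json` and print the
// top-level members without assuming their inner structure.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-windows-amd64.exe", "profile", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var parsed map[string]json.RawMessage // assumption: top level is a JSON object
	if err := json.Unmarshal(out, &parsed); err != nil {
		panic(err)
	}
	for key, raw := range parsed {
		fmt.Printf("%s: %s\n", key, raw)
	}
}
```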

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 version --short
--- PASS: TestFunctional/parallel/Version/short (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (2.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804700 version -o=json --components: (2.8323563s)
--- PASS: TestFunctional/parallel/Version/components (2.83s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (1.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 image ls --format short --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804700 image ls --format short --alsologtostderr: (1.2114209s)
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-804700 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-804700
docker.io/kicbase/echo-server:functional-804700
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-804700 image ls --format short --alsologtostderr:
I0915 06:56:47.196640   10236 out.go:345] Setting OutFile to fd 968 ...
I0915 06:56:47.297641   10236 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:56:47.297641   10236 out.go:358] Setting ErrFile to fd 1168...
I0915 06:56:47.297641   10236 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:56:47.316661   10236 config.go:182] Loaded profile config "functional-804700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:56:47.317669   10236 config.go:182] Loaded profile config "functional-804700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:56:47.347650   10236 cli_runner.go:164] Run: docker container inspect functional-804700 --format={{.State.Status}}
I0915 06:56:47.451648   10236 ssh_runner.go:195] Run: systemctl --version
I0915 06:56:47.462663   10236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-804700
I0915 06:56:47.571651   10236 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49866 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-804700\id_rsa Username:docker}
I0915 06:56:47.934264   10236 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (1.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-804700 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| docker.io/library/nginx                     | alpine            | c7b4f26a7d93f | 43.2MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| docker.io/kicbase/echo-server               | functional-804700 | 9056ab77afb8e | 4.94MB |
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| docker.io/library/minikube-local-cache-test | functional-804700 | 886e8cb4f97e8 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-804700 image ls --format table --alsologtostderr:
I0915 06:56:50.176954    4204 out.go:345] Setting OutFile to fd 1580 ...
I0915 06:56:50.275837    4204 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:56:50.275837    4204 out.go:358] Setting ErrFile to fd 1520...
I0915 06:56:50.275924    4204 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:56:50.294723    4204 config.go:182] Loaded profile config "functional-804700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:56:50.295740    4204 config.go:182] Loaded profile config "functional-804700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:56:50.320728    4204 cli_runner.go:164] Run: docker container inspect functional-804700 --format={{.State.Status}}
I0915 06:56:50.409683    4204 ssh_runner.go:195] Run: systemctl --version
I0915 06:56:50.417683    4204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-804700
I0915 06:56:50.511722    4204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49866 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-804700\id_rsa Username:docker}
I0915 06:56:50.722470    4204 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.82s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-804700 image ls --format json --alsologtostderr:
[{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-804700"],"size":"4940000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"886e8cb4f97e8d3eef8c048aaf4e5761d584eaebbe99e0478478d39533b94442","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-804700"],"size":"30"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"c69fa2e9cbf5f42dc48af631e9
56d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k
8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-804700 image ls --format json --alsologtostderr:
I0915 06:56:49.481420    5752 out.go:345] Setting OutFile to fd 1664 ...
I0915 06:56:49.576857    5752 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:56:49.576857    5752 out.go:358] Setting ErrFile to fd 1568...
I0915 06:56:49.576857    5752 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:56:49.591714    5752 config.go:182] Loaded profile config "functional-804700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:56:49.592724    5752 config.go:182] Loaded profile config "functional-804700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:56:49.607713    5752 cli_runner.go:164] Run: docker container inspect functional-804700 --format={{.State.Status}}
I0915 06:56:49.706568    5752 ssh_runner.go:195] Run: systemctl --version
I0915 06:56:49.713551    5752 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-804700
I0915 06:56:49.788558    5752 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49866 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-804700\id_rsa Username:docker}
I0915 06:56:49.951253    5752 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.70s)
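Note: the stdout of ImageListJson above is a JSON array of objects with id, repoDigests, repoTags and size keys. Below is a small reader for that output written against exactly those keys; it is a consumer sketch, not minikube's own code.

```go
// Sketch: decode the JSON emitted by `image ls --format json` above into a
// struct whose fields mirror the keys visible in this test's stdout.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-windows-amd64.exe", "-p", "functional-804700",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%v %s\n", img.RepoTags, img.Size)
	}
}
```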

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-804700 image ls --format yaml --alsologtostderr:
- id: 886e8cb4f97e8d3eef8c048aaf4e5761d584eaebbe99e0478478d39533b94442
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-804700
size: "30"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-804700
size: "4940000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-804700 image ls --format yaml --alsologtostderr:
I0915 06:56:48.377764    9756 out.go:345] Setting OutFile to fd 1660 ...
I0915 06:56:48.472605    9756 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:56:48.472605    9756 out.go:358] Setting ErrFile to fd 1664...
I0915 06:56:48.472605    9756 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:56:48.487605    9756 config.go:182] Loaded profile config "functional-804700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:56:48.488170    9756 config.go:182] Loaded profile config "functional-804700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:56:48.502606    9756 cli_runner.go:164] Run: docker container inspect functional-804700 --format={{.State.Status}}
I0915 06:56:48.607474    9756 ssh_runner.go:195] Run: systemctl --version
I0915 06:56:48.618397    9756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-804700
I0915 06:56:48.697350    9756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49866 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-804700\id_rsa Username:docker}
I0915 06:56:48.923057    9756 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.82s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (9.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-804700 ssh pgrep buildkitd: exit status 1 (805.4704ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 image build -t localhost/my-image:functional-804700 testdata\build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804700 image build -t localhost/my-image:functional-804700 testdata\build --alsologtostderr: (7.8160667s)
functional_test.go:323: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-804700 image build -t localhost/my-image:functional-804700 testdata\build --alsologtostderr:
I0915 06:56:50.011267    3256 out.go:345] Setting OutFile to fd 1468 ...
I0915 06:56:50.115379    3256 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:56:50.115379    3256 out.go:358] Setting ErrFile to fd 1492...
I0915 06:56:50.115379    3256 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:56:50.152632    3256 config.go:182] Loaded profile config "functional-804700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:56:50.172298    3256 config.go:182] Loaded profile config "functional-804700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0915 06:56:50.186907    3256 cli_runner.go:164] Run: docker container inspect functional-804700 --format={{.State.Status}}
I0915 06:56:50.286720    3256 ssh_runner.go:195] Run: systemctl --version
I0915 06:56:50.294723    3256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-804700
I0915 06:56:50.370683    3256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49866 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\functional-804700\id_rsa Username:docker}
I0915 06:56:50.507083    3256 build_images.go:161] Building image from path: C:\Users\jenkins.minikube2\AppData\Local\Temp\build.2694754554.tar
I0915 06:56:50.522765    3256 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0915 06:56:50.628217    3256 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2694754554.tar
I0915 06:56:50.639456    3256 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2694754554.tar: stat -c "%s %y" /var/lib/minikube/build/build.2694754554.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2694754554.tar': No such file or directory
I0915 06:56:50.639456    3256 ssh_runner.go:362] scp C:\Users\jenkins.minikube2\AppData\Local\Temp\build.2694754554.tar --> /var/lib/minikube/build/build.2694754554.tar (3072 bytes)
I0915 06:56:50.749827    3256 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2694754554
I0915 06:56:50.827419    3256 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2694754554 -xf /var/lib/minikube/build/build.2694754554.tar
I0915 06:56:50.853766    3256 docker.go:360] Building image: /var/lib/minikube/build/build.2694754554
I0915 06:56:50.864941    3256 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-804700 /var/lib/minikube/build/build.2694754554
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile:
#1 transferring dockerfile: 97B done
#1 DONE 0.2s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.9s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 2.8s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.3s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:379aedde7f8cda47640b0106096caaf4fd83d3eb3313f879b0b608ce385d0f9e
#8 writing image sha256:379aedde7f8cda47640b0106096caaf4fd83d3eb3313f879b0b608ce385d0f9e 0.0s done
#8 naming to localhost/my-image:functional-804700 0.0s done
#8 DONE 0.2s
I0915 06:56:57.594755    3256 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-804700 /var/lib/minikube/build/build.2694754554: (6.7297591s)
I0915 06:56:57.608033    3256 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2694754554
I0915 06:56:57.645622    3256 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2694754554.tar
I0915 06:56:57.671406    3256 build_images.go:217] Built localhost/my-image:functional-804700 from C:\Users\jenkins.minikube2\AppData\Local\Temp\build.2694754554.tar
I0915 06:56:57.671503    3256 build_images.go:133] succeeded building to: functional-804700
I0915 06:56:57.671634    3256 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 image ls
E0915 06:57:21.679022    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (9.31s)
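Note: as the stderr above shows (build_images.go), `image build` packages the local build context into a tar (build.2694754554.tar), copies it into the node, extracts it under /var/lib/minikube/build, and only then runs `docker build`. Below is a self-contained sketch of the archiving half using archive/tar; the directory and output file names are illustrative, not minikube's own implementation.

```go
// Hedged sketch: archive a build-context directory into a tar file, the way
// the log above shows the context being packaged before transfer.
package main

import (
	"archive/tar"
	"io"
	"os"
	"path/filepath"
)

func tarDir(src, dst string) error {
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	tw := tar.NewWriter(out)
	defer tw.Close()

	return filepath.Walk(src, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		rel, err := filepath.Rel(src, path)
		if err != nil {
			return err
		}
		hdr.Name = filepath.ToSlash(rel) // store paths relative to the context root
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(tw, f)
		return err
	})
}

func main() {
	// testdata/build is the context directory used by the test above;
	// the output name is illustrative.
	if err := tarDir("testdata/build", "build-context.tar"); err != nil {
		panic(err)
	}
}
```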

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (2.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.1626392s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-804700
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 image load --daemon kicbase/echo-server:functional-804700 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804700 image load --daemon kicbase/echo-server:functional-804700 --alsologtostderr: (2.6879869s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.39s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 service list
functional_test.go:1459: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804700 service list: (1.1145474s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.11s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 image load --daemon kicbase/echo-server:functional-804700 --alsologtostderr
E0915 06:55:59.756176    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:365: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804700 image load --daemon kicbase/echo-server:functional-804700 --alsologtostderr: (1.4904877s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.13s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 service list -o json
functional_test.go:1494: Took "919.5431ms" to run "out/minikube-windows-amd64.exe -p functional-804700 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.92s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-804700 service --namespace=default --https --url hello-node: exit status 1 (15.0119607s)

                                                
                                                
-- stdout --
	https://127.0.0.1:50261

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
functional_test.go:1522: found endpoint: https://127.0.0.1:50261
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)
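Note: with the Docker driver on Windows, minikube service exposes the service through a localhost tunnel that only lives while the command keeps running, so the command prints the URL and then blocks; the test records the endpoint and the process is terminated after roughly 15s, which is why the non-zero exit above is still treated as a pass. A minimal manual reproduction, assuming the same binary and profile as this run:
    out/minikube-windows-amd64.exe -p functional-804700 service --namespace=default --https --url hello-node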

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-804700
functional_test.go:245: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 image load --daemon kicbase/echo-server:functional-804700 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804700 image load --daemon kicbase/echo-server:functional-804700 --alsologtostderr: (1.3709505s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 image save kicbase/echo-server:functional-804700 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804700 image save kicbase/echo-server:functional-804700 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr: (1.4701191s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 image rm kicbase/echo-server:functional-804700 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804700 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr: (1.4255988s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-804700
functional_test.go:424: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 image save --daemon kicbase/echo-server:functional-804700 --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-windows-amd64.exe -p functional-804700 image save --daemon kicbase/echo-server:functional-804700 --alsologtostderr: (1.1346269s)
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-804700
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.33s)
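Taken together, the ImageCommands subtests above exercise a full image round trip against the cluster's container runtime. A rough manual equivalent, assuming the same profile and any writable path for the tar file:
    out/minikube-windows-amd64.exe -p functional-804700 image load --daemon kicbase/echo-server:functional-804700
    out/minikube-windows-amd64.exe -p functional-804700 image save kicbase/echo-server:functional-804700 echo-server-save.tar
    out/minikube-windows-amd64.exe -p functional-804700 image rm kicbase/echo-server:functional-804700
    out/minikube-windows-amd64.exe -p functional-804700 image load echo-server-save.tar
    out/minikube-windows-amd64.exe -p functional-804700 image ls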

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (15.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-804700 service hello-node --url --format={{.IP}}: exit status 1 (15.0302239s)

                                                
                                                
-- stdout --
	127.0.0.1

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.03s)

                                                
                                    
TestFunctional/parallel/DockerEnv/powershell (7.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:499: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-804700 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-804700"
functional_test.go:499: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-804700 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-804700": (4.2659243s)
functional_test.go:522: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-804700 docker-env | Invoke-Expression ; docker images"
functional_test.go:522: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-804700 docker-env | Invoke-Expression ; docker images": (3.2955505s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (7.57s)
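The docker-env subcommand prints environment-variable assignments that point a local docker CLI at the daemon inside the functional-804700 node; in PowerShell the test applies them by piping through Invoke-Expression. A minimal manual check along the same lines:
    out/minikube-windows-amd64.exe -p functional-804700 docker-env | Invoke-Expression ; docker images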

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.50s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.50s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.49s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-804700 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-804700 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-804700 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 4544: OpenProcess: The parameter is incorrect.
helpers_test.go:502: unable to terminate pid 4756: Access is denied.
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-804700 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.99s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-804700 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-804700 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [bfea4f05-e00a-44ae-a899-685724df3bb9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [bfea4f05-e00a-44ae-a899-685724df3bb9] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 15.0078413s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.59s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (15.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-804700 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-804700 service hello-node --url: exit status 1 (15.0319762s)

                                                
                                                
-- stdout --
	http://127.0.0.1:50323

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
functional_test.go:1565: found endpoint for hello-node: http://127.0.0.1:50323
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.03s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-804700 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.29s)
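The tunnel subtests depend on minikube tunnel staying up in the background: while it runs, LoadBalancer services such as nginx-svc receive an ingress IP that the jsonpath query above reads back. A manual sketch, assuming the tunnel is left running in a separate terminal:
    out/minikube-windows-amd64.exe -p functional-804700 tunnel --alsologtostderr
    kubectl --context functional-804700 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}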

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-804700 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 12480: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 4528: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.21s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-804700
--- PASS: TestFunctional/delete_echo-server_images (0.21s)

                                                
                                    
TestFunctional/delete_my-image_image (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-804700
--- PASS: TestFunctional/delete_my-image_image (0.08s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.09s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-804700
--- PASS: TestFunctional/delete_minikube_cached_images (0.09s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (212.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-062300 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-062300 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker: (3m30.3178486s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062300 status -v=7 --alsologtostderr: (2.3546697s)
--- PASS: TestMultiControlPlane/serial/StartCluster (212.67s)
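The --ha flag asks minikube to provision a multi-control-plane cluster (three control-plane nodes in this run, per the status output later in this report). A manual equivalent using the same settings as above:
    out/minikube-windows-amd64.exe start -p ha-062300 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker
    out/minikube-windows-amd64.exe -p ha-062300 status -v=7 --alsologtostderr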

                                                
                                    
TestMultiControlPlane/serial/DeployApp (26.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062300 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062300 -- rollout status deployment/busybox
E0915 07:04:37.797174    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-062300 -- rollout status deployment/busybox: (16.0403753s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062300 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062300 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062300 -- exec busybox-7dff88458-tg6hv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-062300 -- exec busybox-7dff88458-tg6hv -- nslookup kubernetes.io: (1.8985895s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062300 -- exec busybox-7dff88458-v9sb5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-062300 -- exec busybox-7dff88458-v9sb5 -- nslookup kubernetes.io: (1.6050043s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062300 -- exec busybox-7dff88458-w6ct5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-062300 -- exec busybox-7dff88458-w6ct5 -- nslookup kubernetes.io: (1.5831897s)
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062300 -- exec busybox-7dff88458-tg6hv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062300 -- exec busybox-7dff88458-v9sb5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062300 -- exec busybox-7dff88458-w6ct5 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062300 -- exec busybox-7dff88458-tg6hv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062300 -- exec busybox-7dff88458-v9sb5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062300 -- exec busybox-7dff88458-w6ct5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (26.24s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (3.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062300 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062300 -- exec busybox-7dff88458-tg6hv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062300 -- exec busybox-7dff88458-tg6hv -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062300 -- exec busybox-7dff88458-v9sb5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062300 -- exec busybox-7dff88458-v9sb5 -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062300 -- exec busybox-7dff88458-w6ct5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062300 -- exec busybox-7dff88458-w6ct5 -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (3.85s)
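Each busybox pod first resolves host.minikube.internal and then pings the returned address (192.168.65.254, the host gateway on this Docker Desktop runner). A single-pod version of the same check:
    out/minikube-windows-amd64.exe kubectl -p ha-062300 -- exec busybox-7dff88458-tg6hv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-windows-amd64.exe kubectl -p ha-062300 -- exec busybox-7dff88458-tg6hv -- sh -c "ping -c 1 192.168.65.254"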

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (55.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-062300 -v=7 --alsologtostderr
E0915 07:05:38.639725    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 07:05:38.647199    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 07:05:38.659417    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 07:05:38.681244    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 07:05:38.724119    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 07:05:38.806990    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 07:05:38.969116    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 07:05:39.291752    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 07:05:39.933644    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 07:05:41.217015    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 07:05:43.779464    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 07:05:48.901811    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-062300 -v=7 --alsologtostderr: (52.6728094s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062300 status -v=7 --alsologtostderr: (2.8574331s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.53s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-062300 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.21s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (2.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.1750873s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (2.18s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (49.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062300 status --output json -v=7 --alsologtostderr: (2.8542231s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 cp testdata\cp-test.txt ha-062300:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 ssh -n ha-062300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 cp ha-062300:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3106551191\001\cp-test_ha-062300.txt
E0915 07:05:59.144160    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 ssh -n ha-062300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 cp ha-062300:/home/docker/cp-test.txt ha-062300-m02:/home/docker/cp-test_ha-062300_ha-062300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062300 cp ha-062300:/home/docker/cp-test.txt ha-062300-m02:/home/docker/cp-test_ha-062300_ha-062300-m02.txt: (1.1015363s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 ssh -n ha-062300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 ssh -n ha-062300-m02 "sudo cat /home/docker/cp-test_ha-062300_ha-062300-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 cp ha-062300:/home/docker/cp-test.txt ha-062300-m03:/home/docker/cp-test_ha-062300_ha-062300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062300 cp ha-062300:/home/docker/cp-test.txt ha-062300-m03:/home/docker/cp-test_ha-062300_ha-062300-m03.txt: (1.245827s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 ssh -n ha-062300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 ssh -n ha-062300-m03 "sudo cat /home/docker/cp-test_ha-062300_ha-062300-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 cp ha-062300:/home/docker/cp-test.txt ha-062300-m04:/home/docker/cp-test_ha-062300_ha-062300-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062300 cp ha-062300:/home/docker/cp-test.txt ha-062300-m04:/home/docker/cp-test_ha-062300_ha-062300-m04.txt: (1.1544197s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 ssh -n ha-062300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 ssh -n ha-062300-m04 "sudo cat /home/docker/cp-test_ha-062300_ha-062300-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 cp testdata\cp-test.txt ha-062300-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 ssh -n ha-062300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 cp ha-062300-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3106551191\001\cp-test_ha-062300-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 ssh -n ha-062300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 cp ha-062300-m02:/home/docker/cp-test.txt ha-062300:/home/docker/cp-test_ha-062300-m02_ha-062300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062300 cp ha-062300-m02:/home/docker/cp-test.txt ha-062300:/home/docker/cp-test_ha-062300-m02_ha-062300.txt: (1.2029078s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 ssh -n ha-062300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 ssh -n ha-062300 "sudo cat /home/docker/cp-test_ha-062300-m02_ha-062300.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 cp ha-062300-m02:/home/docker/cp-test.txt ha-062300-m03:/home/docker/cp-test_ha-062300-m02_ha-062300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062300 cp ha-062300-m02:/home/docker/cp-test.txt ha-062300-m03:/home/docker/cp-test_ha-062300-m02_ha-062300-m03.txt: (1.163163s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 ssh -n ha-062300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 ssh -n ha-062300-m03 "sudo cat /home/docker/cp-test_ha-062300-m02_ha-062300-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 cp ha-062300-m02:/home/docker/cp-test.txt ha-062300-m04:/home/docker/cp-test_ha-062300-m02_ha-062300-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062300 cp ha-062300-m02:/home/docker/cp-test.txt ha-062300-m04:/home/docker/cp-test_ha-062300-m02_ha-062300-m04.txt: (1.183572s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 ssh -n ha-062300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 ssh -n ha-062300-m04 "sudo cat /home/docker/cp-test_ha-062300-m02_ha-062300-m04.txt"
E0915 07:06:19.626687    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 cp testdata\cp-test.txt ha-062300-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 ssh -n ha-062300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 cp ha-062300-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3106551191\001\cp-test_ha-062300-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 ssh -n ha-062300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 cp ha-062300-m03:/home/docker/cp-test.txt ha-062300:/home/docker/cp-test_ha-062300-m03_ha-062300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062300 cp ha-062300-m03:/home/docker/cp-test.txt ha-062300:/home/docker/cp-test_ha-062300-m03_ha-062300.txt: (1.2143047s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 ssh -n ha-062300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 ssh -n ha-062300 "sudo cat /home/docker/cp-test_ha-062300-m03_ha-062300.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 cp ha-062300-m03:/home/docker/cp-test.txt ha-062300-m02:/home/docker/cp-test_ha-062300-m03_ha-062300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062300 cp ha-062300-m03:/home/docker/cp-test.txt ha-062300-m02:/home/docker/cp-test_ha-062300-m03_ha-062300-m02.txt: (1.1502994s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 ssh -n ha-062300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 ssh -n ha-062300-m02 "sudo cat /home/docker/cp-test_ha-062300-m03_ha-062300-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 cp ha-062300-m03:/home/docker/cp-test.txt ha-062300-m04:/home/docker/cp-test_ha-062300-m03_ha-062300-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062300 cp ha-062300-m03:/home/docker/cp-test.txt ha-062300-m04:/home/docker/cp-test_ha-062300-m03_ha-062300-m04.txt: (1.1866143s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 ssh -n ha-062300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 ssh -n ha-062300-m04 "sudo cat /home/docker/cp-test_ha-062300-m03_ha-062300-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 cp testdata\cp-test.txt ha-062300-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 ssh -n ha-062300-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 cp ha-062300-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3106551191\001\cp-test_ha-062300-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 ssh -n ha-062300-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 cp ha-062300-m04:/home/docker/cp-test.txt ha-062300:/home/docker/cp-test_ha-062300-m04_ha-062300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062300 cp ha-062300-m04:/home/docker/cp-test.txt ha-062300:/home/docker/cp-test_ha-062300-m04_ha-062300.txt: (1.1900883s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 ssh -n ha-062300-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 ssh -n ha-062300 "sudo cat /home/docker/cp-test_ha-062300-m04_ha-062300.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 cp ha-062300-m04:/home/docker/cp-test.txt ha-062300-m02:/home/docker/cp-test_ha-062300-m04_ha-062300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062300 cp ha-062300-m04:/home/docker/cp-test.txt ha-062300-m02:/home/docker/cp-test_ha-062300-m04_ha-062300-m02.txt: (1.1742494s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 ssh -n ha-062300-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 ssh -n ha-062300-m02 "sudo cat /home/docker/cp-test_ha-062300-m04_ha-062300-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 cp ha-062300-m04:/home/docker/cp-test.txt ha-062300-m03:/home/docker/cp-test_ha-062300-m04_ha-062300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062300 cp ha-062300-m04:/home/docker/cp-test.txt ha-062300-m03:/home/docker/cp-test_ha-062300-m04_ha-062300-m03.txt: (1.2474618s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 ssh -n ha-062300-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 ssh -n ha-062300-m03 "sudo cat /home/docker/cp-test_ha-062300-m04_ha-062300-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (49.50s)
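Every copy above is verified by reading the file back over SSH on the target node. One host-to-node-to-node hop from this run can be reproduced with:
    out/minikube-windows-amd64.exe -p ha-062300 cp testdata\cp-test.txt ha-062300:/home/docker/cp-test.txt
    out/minikube-windows-amd64.exe -p ha-062300 cp ha-062300:/home/docker/cp-test.txt ha-062300-m02:/home/docker/cp-test_ha-062300_ha-062300-m02.txt
    out/minikube-windows-amd64.exe -p ha-062300 ssh -n ha-062300-m02 "sudo cat /home/docker/cp-test_ha-062300_ha-062300-m02.txt"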

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (14.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062300 node stop m02 -v=7 --alsologtostderr: (11.9702207s)
ha_test.go:369: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-062300 status -v=7 --alsologtostderr: exit status 7 (2.2416609s)

                                                
                                                
-- stdout --
	ha-062300
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-062300-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-062300-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-062300-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 07:06:56.007585    1064 out.go:345] Setting OutFile to fd 1736 ...
	I0915 07:06:56.090394    1064 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:06:56.090560    1064 out.go:358] Setting ErrFile to fd 1728...
	I0915 07:06:56.090560    1064 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:06:56.104208    1064 out.go:352] Setting JSON to false
	I0915 07:06:56.104208    1064 mustload.go:65] Loading cluster: ha-062300
	I0915 07:06:56.104208    1064 notify.go:220] Checking for updates...
	I0915 07:06:56.104208    1064 config.go:182] Loaded profile config "ha-062300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 07:06:56.104208    1064 status.go:255] checking status of ha-062300 ...
	I0915 07:06:56.121955    1064 cli_runner.go:164] Run: docker container inspect ha-062300 --format={{.State.Status}}
	I0915 07:06:56.221783    1064 status.go:330] ha-062300 host status = "Running" (err=<nil>)
	I0915 07:06:56.221896    1064 host.go:66] Checking if "ha-062300" exists ...
	I0915 07:06:56.234294    1064 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-062300
	I0915 07:06:56.308360    1064 host.go:66] Checking if "ha-062300" exists ...
	I0915 07:06:56.321009    1064 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:06:56.327617    1064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-062300
	I0915 07:06:56.401273    1064 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50516 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\ha-062300\id_rsa Username:docker}
	I0915 07:06:56.560223    1064 ssh_runner.go:195] Run: systemctl --version
	I0915 07:06:56.584256    1064 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:06:56.620946    1064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-062300
	I0915 07:06:56.706927    1064 kubeconfig.go:125] found "ha-062300" server: "https://127.0.0.1:50520"
	I0915 07:06:56.706927    1064 api_server.go:166] Checking apiserver status ...
	I0915 07:06:56.719937    1064 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:06:56.776659    1064 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2415/cgroup
	I0915 07:06:56.802566    1064 api_server.go:182] apiserver freezer: "7:freezer:/docker/a38ab95e6a69d44466821f1ec9bb636116f287d5ca6d271c7580e10ed5c391ac/kubepods/burstable/podff2d4cc60a5b463c09929b193d996c19/0c45c915288ccf01583cd70d32cccac6f198e2844148487f1486c6157726d481"
	I0915 07:06:56.815455    1064 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a38ab95e6a69d44466821f1ec9bb636116f287d5ca6d271c7580e10ed5c391ac/kubepods/burstable/podff2d4cc60a5b463c09929b193d996c19/0c45c915288ccf01583cd70d32cccac6f198e2844148487f1486c6157726d481/freezer.state
	I0915 07:06:56.838937    1064 api_server.go:204] freezer state: "THAWED"
	I0915 07:06:56.839031    1064 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:50520/healthz ...
	I0915 07:06:56.852569    1064 api_server.go:279] https://127.0.0.1:50520/healthz returned 200:
	ok
	I0915 07:06:56.852569    1064 status.go:422] ha-062300 apiserver status = Running (err=<nil>)
	I0915 07:06:56.852569    1064 status.go:257] ha-062300 status: &{Name:ha-062300 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:06:56.852569    1064 status.go:255] checking status of ha-062300-m02 ...
	I0915 07:06:56.872727    1064 cli_runner.go:164] Run: docker container inspect ha-062300-m02 --format={{.State.Status}}
	I0915 07:06:56.957214    1064 status.go:330] ha-062300-m02 host status = "Stopped" (err=<nil>)
	I0915 07:06:56.957385    1064 status.go:343] host is not running, skipping remaining checks
	I0915 07:06:56.957385    1064 status.go:257] ha-062300-m02 status: &{Name:ha-062300-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:06:56.957520    1064 status.go:255] checking status of ha-062300-m03 ...
	I0915 07:06:56.976005    1064 cli_runner.go:164] Run: docker container inspect ha-062300-m03 --format={{.State.Status}}
	I0915 07:06:57.055374    1064 status.go:330] ha-062300-m03 host status = "Running" (err=<nil>)
	I0915 07:06:57.055374    1064 host.go:66] Checking if "ha-062300-m03" exists ...
	I0915 07:06:57.064990    1064 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-062300-m03
	I0915 07:06:57.148121    1064 host.go:66] Checking if "ha-062300-m03" exists ...
	I0915 07:06:57.164925    1064 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:06:57.172510    1064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-062300-m03
	I0915 07:06:57.245882    1064 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50674 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\ha-062300-m03\id_rsa Username:docker}
	I0915 07:06:57.393667    1064 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:06:57.429848    1064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-062300
	I0915 07:06:57.510018    1064 kubeconfig.go:125] found "ha-062300" server: "https://127.0.0.1:50520"
	I0915 07:06:57.510018    1064 api_server.go:166] Checking apiserver status ...
	I0915 07:06:57.522010    1064 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:06:57.562107    1064 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2323/cgroup
	I0915 07:06:57.583762    1064 api_server.go:182] apiserver freezer: "7:freezer:/docker/da03112279b458db924586995159530c398fa2fab17101d47fca2b1f6287fd21/kubepods/burstable/podc322d95c667035fa2cf19138ade0019a/b120eee6abb2628e97d74e5d13eb0e86e93b662c597a41e9f139f8a6be435dce"
	I0915 07:06:57.595750    1064 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/da03112279b458db924586995159530c398fa2fab17101d47fca2b1f6287fd21/kubepods/burstable/podc322d95c667035fa2cf19138ade0019a/b120eee6abb2628e97d74e5d13eb0e86e93b662c597a41e9f139f8a6be435dce/freezer.state
	I0915 07:06:57.617044    1064 api_server.go:204] freezer state: "THAWED"
	I0915 07:06:57.617044    1064 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:50520/healthz ...
	I0915 07:06:57.628864    1064 api_server.go:279] https://127.0.0.1:50520/healthz returned 200:
	ok
	I0915 07:06:57.628864    1064 status.go:422] ha-062300-m03 apiserver status = Running (err=<nil>)
	I0915 07:06:57.628864    1064 status.go:257] ha-062300-m03 status: &{Name:ha-062300-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:06:57.628864    1064 status.go:255] checking status of ha-062300-m04 ...
	I0915 07:06:57.644983    1064 cli_runner.go:164] Run: docker container inspect ha-062300-m04 --format={{.State.Status}}
	I0915 07:06:57.719705    1064 status.go:330] ha-062300-m04 host status = "Running" (err=<nil>)
	I0915 07:06:57.719705    1064 host.go:66] Checking if "ha-062300-m04" exists ...
	I0915 07:06:57.727720    1064 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-062300-m04
	I0915 07:06:57.804743    1064 host.go:66] Checking if "ha-062300-m04" exists ...
	I0915 07:06:57.818474    1064 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:06:57.825314    1064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-062300-m04
	I0915 07:06:57.912106    1064 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50835 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\ha-062300-m04\id_rsa Username:docker}
	I0915 07:06:58.070805    1064 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:06:58.097738    1064 status.go:257] ha-062300-m04 status: &{Name:ha-062300-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (14.21s)
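Stopping one control-plane node leaves the cluster reachable through the remaining ones, but minikube status exits non-zero (status 7 in this run) because a node is reported as Stopped, which is exactly the exit the test expects above. Manual equivalent:
    out/minikube-windows-amd64.exe -p ha-062300 node stop m02 -v=7 --alsologtostderr
    out/minikube-windows-amd64.exe -p ha-062300 status -v=7 --alsologtostderr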

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.7286692s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.73s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (153.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 node start m02 -v=7 --alsologtostderr
E0915 07:07:00.589421    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 07:08:22.512724    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062300 node start m02 -v=7 --alsologtostderr: (2m30.217692s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062300 status -v=7 --alsologtostderr: (2.8215961s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (153.23s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.2688375s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.27s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (241.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-windows-amd64.exe node list -p ha-062300 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-windows-amd64.exe stop -p ha-062300 -v=7 --alsologtostderr
E0915 07:09:37.799820    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-windows-amd64.exe stop -p ha-062300 -v=7 --alsologtostderr: (38.6395112s)
ha_test.go:467: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-062300 --wait=true -v=7 --alsologtostderr
E0915 07:10:38.642888    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 07:11:00.890951    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 07:11:06.356889    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-062300 --wait=true -v=7 --alsologtostderr: (3m22.058712s)
ha_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe node list -p ha-062300
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (241.20s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (16.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062300 node delete m03 -v=7 --alsologtostderr: (14.0976328s)
ha_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062300 status -v=7 --alsologtostderr: (2.1228435s)
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.98s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.5779233s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.58s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062300 stop -v=7 --alsologtostderr: (36.1393653s)
ha_test.go:537: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-062300 status -v=7 --alsologtostderr: exit status 7 (513.4094ms)

                                                
                                                
-- stdout --
	ha-062300
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-062300-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-062300-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 07:14:31.367129    6624 out.go:345] Setting OutFile to fd 1072 ...
	I0915 07:14:31.442228    6624 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:14:31.442349    6624 out.go:358] Setting ErrFile to fd 1676...
	I0915 07:14:31.442349    6624 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:14:31.454844    6624 out.go:352] Setting JSON to false
	I0915 07:14:31.454844    6624 mustload.go:65] Loading cluster: ha-062300
	I0915 07:14:31.454844    6624 notify.go:220] Checking for updates...
	I0915 07:14:31.455880    6624 config.go:182] Loaded profile config "ha-062300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 07:14:31.455880    6624 status.go:255] checking status of ha-062300 ...
	I0915 07:14:31.474342    6624 cli_runner.go:164] Run: docker container inspect ha-062300 --format={{.State.Status}}
	I0915 07:14:31.554756    6624 status.go:330] ha-062300 host status = "Stopped" (err=<nil>)
	I0915 07:14:31.554895    6624 status.go:343] host is not running, skipping remaining checks
	I0915 07:14:31.554980    6624 status.go:257] ha-062300 status: &{Name:ha-062300 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:14:31.555026    6624 status.go:255] checking status of ha-062300-m02 ...
	I0915 07:14:31.573607    6624 cli_runner.go:164] Run: docker container inspect ha-062300-m02 --format={{.State.Status}}
	I0915 07:14:31.651377    6624 status.go:330] ha-062300-m02 host status = "Stopped" (err=<nil>)
	I0915 07:14:31.651377    6624 status.go:343] host is not running, skipping remaining checks
	I0915 07:14:31.651377    6624 status.go:257] ha-062300-m02 status: &{Name:ha-062300-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:14:31.651377    6624 status.go:255] checking status of ha-062300-m04 ...
	I0915 07:14:31.667914    6624 cli_runner.go:164] Run: docker container inspect ha-062300-m04 --format={{.State.Status}}
	I0915 07:14:31.742807    6624 status.go:330] ha-062300-m04 host status = "Stopped" (err=<nil>)
	I0915 07:14:31.743943    6624 status.go:343] host is not running, skipping remaining checks
	I0915 07:14:31.743943    6624 status.go:257] ha-062300-m04 status: &{Name:ha-062300-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.65s)
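The stderr above shows how the status command concludes each node is Stopped: it inspects the profile's Docker containers with --format={{.State.Status}}, skips the kubelet/apiserver checks once the host is down, and exits non-zero (status 7 here). A rough Go illustration of that container probe follows; the container names are the ones from this run, and everything else is an assumption for illustration, not minikube's real implementation.

// status_sketch.go - rough illustration of the "docker container inspect
// --format={{.State.Status}}" probe logged above. Assumes the Docker CLI is on
// PATH and that node containers follow the <profile>[-m0N] naming seen in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostState(container string) string {
	out, err := exec.Command("docker", "container", "inspect", container,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "Nonexistent" // inspect fails when the container does not exist
	}
	if strings.TrimSpace(string(out)) == "running" {
		return "Running"
	}
	return "Stopped"
}

func main() {
	for _, node := range []string{"ha-062300", "ha-062300-m02", "ha-062300-m04"} {
		fmt.Printf("%s\thost: %s\n", node, hostState(node))
	}
}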

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (109.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-062300 --wait=true -v=7 --alsologtostderr --driver=docker
E0915 07:14:37.802596    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 07:15:38.645475    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-062300 --wait=true -v=7 --alsologtostderr --driver=docker: (1m47.0537714s)
ha_test.go:566: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 status -v=7 --alsologtostderr
ha_test.go:566: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062300 status -v=7 --alsologtostderr: (2.2161335s)
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (109.84s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.593375s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.59s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (76.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-062300 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-062300 --control-plane -v=7 --alsologtostderr: (1m13.3899579s)
ha_test.go:611: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062300 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062300 status -v=7 --alsologtostderr: (2.9952898s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.39s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (2.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.2452596s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (2.25s)

                                                
                                    
TestImageBuild/serial/Setup (64.4s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-119300 --driver=docker
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-119300 --driver=docker: (1m4.4011065s)
--- PASS: TestImageBuild/serial/Setup (64.40s)

                                                
                                    
TestImageBuild/serial/NormalBuild (5.82s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-119300
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-119300: (5.8201055s)
--- PASS: TestImageBuild/serial/NormalBuild (5.82s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (2.5s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-119300
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-119300: (2.4987138s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (2.50s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (1.64s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-119300
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-119300: (1.6365375s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.64s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.72s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-119300
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-119300: (1.7191999s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.72s)

                                                
                                    
TestJSONOutput/start/Command (95.76s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-379900 --output=json --user=testUser --memory=2200 --wait=true --driver=docker
E0915 07:19:37.806038    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 07:20:38.648322    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-379900 --output=json --user=testUser --memory=2200 --wait=true --driver=docker: (1m35.7546881s)
--- PASS: TestJSONOutput/start/Command (95.76s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (1.55s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-379900 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-379900 --output=json --user=testUser: (1.5502122s)
--- PASS: TestJSONOutput/pause/Command (1.55s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (1.26s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-379900 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-379900 --output=json --user=testUser: (1.2611038s)
--- PASS: TestJSONOutput/unpause/Command (1.26s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (12.59s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-379900 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-379900 --output=json --user=testUser: (12.5858024s)
--- PASS: TestJSONOutput/stop/Command (12.59s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.99s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-743600 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-743600 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (288.0297ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a4ca78fa-bf21-4fdd-8115-7e8ad30c08eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-743600] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3b77ab98-edfa-4ed4-b36d-35a042947840","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"d8ffb4bf-08f6-4fb0-a8fb-ce58124d7217","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"32deb2a9-55a4-489e-9b61-793c14863254","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"f4ad98f4-316f-49b0-b5c3-8c39b7d0d5c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19644"}}
	{"specversion":"1.0","id":"2e18d03c-44ac-490e-af75-71be6d883ffb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ba7a85e5-f5e1-427e-a910-8a5bae0b32ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-743600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-743600
--- PASS: TestErrorJSONOutput (0.99s)
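Both the step and error lines in the stdout above are CloudEvents-style JSON objects, and the failure surfaces as a type io.k8s.sigs.minikube.error event carrying name, exitcode, and message. The sketch below decodes such a stream in Go; the struct fields mirror what the log shows, while reading the stream from stdin is just an assumption made for illustration.

// parse_events.go - a small sketch for consuming minikube's --output=json stream.
// Field names are taken from the events shown above; pipe the JSON lines into stdin.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string `json:"type"`
	Data struct {
		Name     string `json:"name"`
		Message  string `json:"message"`
		ExitCode string `json:"exitcode"`
	} `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON noise
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit %s): %s\n", ev.Data.Name, ev.Data.ExitCode, ev.Data.Message)
			os.Exit(1)
		}
		fmt.Println(ev.Data.Message)
	}
}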

                                                
                                    
TestKicCustomNetwork/create_custom_network (82.46s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-761300 --network=
E0915 07:22:01.725382    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-761300 --network=: (1m18.7171077s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-761300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-761300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-761300: (3.6579215s)
--- PASS: TestKicCustomNetwork/create_custom_network (82.46s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (70.51s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-603200 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-603200 --network=bridge: (1m6.7267082s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-603200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-603200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-603200: (3.6942162s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (70.51s)

                                                
                                    
TestKicExistingNetwork (71.2s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-944000 --network=existing-network
E0915 07:24:37.809521    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-944000 --network=existing-network: (1m6.964526s)
helpers_test.go:175: Cleaning up "existing-network-944000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-944000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-944000: (3.4506485s)
--- PASS: TestKicExistingNetwork (71.20s)

                                                
                                    
TestKicCustomSubnet (70.77s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-946300 --subnet=192.168.60.0/24
E0915 07:25:38.650611    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-946300 --subnet=192.168.60.0/24: (1m6.6364075s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-946300 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-946300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-946300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-946300: (4.0426516s)
--- PASS: TestKicCustomSubnet (70.77s)
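The subnet assertion reduces to comparing the value passed via --subnet with what docker network inspect reports for the profile's network. A hedged Go sketch of that comparison, reusing the exact inspect format string from the log (network name and subnet are the ones from this run):

// subnet_check.go - sketch of the subnet verification above: compare the subnet
// requested via --subnet with what Docker reports for the profile network.
// Assumes the Docker CLI is on PATH.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const network, want = "custom-subnet-946300", "192.168.60.0/24"

	out, err := exec.Command("docker", "network", "inspect", network,
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	got := strings.TrimSpace(string(out))
	if got != want {
		fmt.Printf("subnet mismatch: want %s, got %s\n", want, got)
		return
	}
	fmt.Println("subnet matches:", got)
}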

                                                
                                    
TestKicStaticIP (72.44s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe start -p static-ip-849600 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe start -p static-ip-849600 --static-ip=192.168.200.200: (1m8.141186s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe -p static-ip-849600 ip
helpers_test.go:175: Cleaning up "static-ip-849600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p static-ip-849600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p static-ip-849600: (3.8463053s)
--- PASS: TestKicStaticIP (72.44s)

                                                
                                    
TestMainNoArgs (0.23s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.23s)

                                                
                                    
TestMinikubeProfile (142.33s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-307100 --driver=docker
E0915 07:27:40.903482    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-307100 --driver=docker: (1m5.4665264s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-307100 --driver=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-307100 --driver=docker: (1m3.4066694s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-307100
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (1.8738884s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-307100
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (1.6848284s)
helpers_test.go:175: Cleaning up "second-307100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-307100
E0915 07:29:37.810742    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-307100: (4.2982347s)
helpers_test.go:175: Cleaning up "first-307100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-307100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-307100: (4.9085877s)
--- PASS: TestMinikubeProfile (142.33s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (19.54s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-480000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-480000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: (18.5400941s)
--- PASS: TestMountStart/serial/StartWithMountFirst (19.54s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.81s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-480000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.81s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (17.47s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-480000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-480000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker: (16.4718461s)
--- PASS: TestMountStart/serial/StartWithMountSecond (17.47s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.78s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-480000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.78s)

                                                
                                    
TestMountStart/serial/DeleteFirst (2.79s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-480000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-480000 --alsologtostderr -v=5: (2.7863458s)
--- PASS: TestMountStart/serial/DeleteFirst (2.79s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.75s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-480000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.75s)

                                                
                                    
TestMountStart/serial/Stop (2.03s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-480000
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-480000: (2.0326456s)
--- PASS: TestMountStart/serial/Stop (2.03s)

                                                
                                    
TestMountStart/serial/RestartStopped (12.95s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-480000
E0915 07:30:38.654391    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-480000: (11.9444868s)
--- PASS: TestMountStart/serial/RestartStopped (12.95s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.75s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-480000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.75s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (149.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-609900 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-609900 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker: (2m27.9701083s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-609900 status --alsologtostderr: (1.6307518s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (149.60s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (36.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-609900 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-609900 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-609900 -- rollout status deployment/busybox: (30.0438888s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-609900 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-609900 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-609900 -- exec busybox-7dff88458-pc5bg -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-609900 -- exec busybox-7dff88458-pc5bg -- nslookup kubernetes.io: (1.6749374s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-609900 -- exec busybox-7dff88458-pmgz9 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-609900 -- exec busybox-7dff88458-pmgz9 -- nslookup kubernetes.io: (1.5294623s)
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-609900 -- exec busybox-7dff88458-pc5bg -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-609900 -- exec busybox-7dff88458-pmgz9 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-609900 -- exec busybox-7dff88458-pc5bg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-609900 -- exec busybox-7dff88458-pmgz9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (36.96s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (2.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-609900 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-609900 -- exec busybox-7dff88458-pc5bg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-609900 -- exec busybox-7dff88458-pc5bg -- sh -c "ping -c 1 192.168.65.254"
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-609900 -- exec busybox-7dff88458-pmgz9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-609900 -- exec busybox-7dff88458-pmgz9 -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (2.57s)

                                                
                                    
TestMultiNode/serial/AddNode (51.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-609900 -v 3 --alsologtostderr
E0915 07:34:37.814018    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-609900 -v 3 --alsologtostderr: (48.9333207s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-609900 status --alsologtostderr: (2.0873735s)
--- PASS: TestMultiNode/serial/AddNode (51.02s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-609900 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.19s)

                                                
                                    
TestMultiNode/serial/ProfileList (1.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.0082379s)
--- PASS: TestMultiNode/serial/ProfileList (1.01s)

                                                
                                    
TestMultiNode/serial/CopyFile (26.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-609900 status --output json --alsologtostderr: (1.8190357s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 cp testdata\cp-test.txt multinode-609900:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 ssh -n multinode-609900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 cp multinode-609900:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiNodeserialCopyFile3061226422\001\cp-test_multinode-609900.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 ssh -n multinode-609900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 cp multinode-609900:/home/docker/cp-test.txt multinode-609900-m02:/home/docker/cp-test_multinode-609900_multinode-609900-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-609900 cp multinode-609900:/home/docker/cp-test.txt multinode-609900-m02:/home/docker/cp-test_multinode-609900_multinode-609900-m02.txt: (1.1108499s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 ssh -n multinode-609900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 ssh -n multinode-609900-m02 "sudo cat /home/docker/cp-test_multinode-609900_multinode-609900-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 cp multinode-609900:/home/docker/cp-test.txt multinode-609900-m03:/home/docker/cp-test_multinode-609900_multinode-609900-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-609900 cp multinode-609900:/home/docker/cp-test.txt multinode-609900-m03:/home/docker/cp-test_multinode-609900_multinode-609900-m03.txt: (1.0671826s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 ssh -n multinode-609900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 ssh -n multinode-609900-m03 "sudo cat /home/docker/cp-test_multinode-609900_multinode-609900-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 cp testdata\cp-test.txt multinode-609900-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 ssh -n multinode-609900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 cp multinode-609900-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiNodeserialCopyFile3061226422\001\cp-test_multinode-609900-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 ssh -n multinode-609900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 cp multinode-609900-m02:/home/docker/cp-test.txt multinode-609900:/home/docker/cp-test_multinode-609900-m02_multinode-609900.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-609900 cp multinode-609900-m02:/home/docker/cp-test.txt multinode-609900:/home/docker/cp-test_multinode-609900-m02_multinode-609900.txt: (1.0934628s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 ssh -n multinode-609900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 ssh -n multinode-609900 "sudo cat /home/docker/cp-test_multinode-609900-m02_multinode-609900.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 cp multinode-609900-m02:/home/docker/cp-test.txt multinode-609900-m03:/home/docker/cp-test_multinode-609900-m02_multinode-609900-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-609900 cp multinode-609900-m02:/home/docker/cp-test.txt multinode-609900-m03:/home/docker/cp-test_multinode-609900-m02_multinode-609900-m03.txt: (1.1237121s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 ssh -n multinode-609900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 ssh -n multinode-609900-m03 "sudo cat /home/docker/cp-test_multinode-609900-m02_multinode-609900-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 cp testdata\cp-test.txt multinode-609900-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 ssh -n multinode-609900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 cp multinode-609900-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube2\AppData\Local\Temp\TestMultiNodeserialCopyFile3061226422\001\cp-test_multinode-609900-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 ssh -n multinode-609900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 cp multinode-609900-m03:/home/docker/cp-test.txt multinode-609900:/home/docker/cp-test_multinode-609900-m03_multinode-609900.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-609900 cp multinode-609900-m03:/home/docker/cp-test.txt multinode-609900:/home/docker/cp-test_multinode-609900-m03_multinode-609900.txt: (1.146191s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 ssh -n multinode-609900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 ssh -n multinode-609900 "sudo cat /home/docker/cp-test_multinode-609900-m03_multinode-609900.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 cp multinode-609900-m03:/home/docker/cp-test.txt multinode-609900-m02:/home/docker/cp-test_multinode-609900-m03_multinode-609900-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-609900 cp multinode-609900-m03:/home/docker/cp-test.txt multinode-609900-m02:/home/docker/cp-test_multinode-609900-m03_multinode-609900-m02.txt: (1.1034342s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 ssh -n multinode-609900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 ssh -n multinode-609900-m02 "sudo cat /home/docker/cp-test_multinode-609900-m03_multinode-609900-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (26.92s)
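Each cp assertion above is a round trip: copy a file into a node with minikube cp, then cat it back over minikube ssh and compare. A compact Go sketch of one such round trip follows; the profile, node, and remote path are taken from this run, and the comparison logic is illustrative rather than the helpers' actual code.

// cp_roundtrip.go - minimal sketch of the cp/ssh round trip performed above.
// Assumes the minikube binary is on PATH and the profile is already running.
package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func run(args ...string) []byte {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v failed: %v\n%s", args, err, out)
	}
	return out
}

func main() {
	local := "testdata/cp-test.txt" // forward slashes for portability; the log uses testdata\cp-test.txt
	want, err := os.ReadFile(local)
	if err != nil {
		log.Fatal(err)
	}

	run("-p", "multinode-609900", "cp", local, "multinode-609900:/home/docker/cp-test.txt")
	got := run("-p", "multinode-609900", "ssh", "-n", "multinode-609900",
		"sudo cat /home/docker/cp-test.txt")

	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatal("copied file does not match the local source")
	}
	log.Println("cp round trip verified")
}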

                                                
                                    
TestMultiNode/serial/StopNode (4.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-609900 node stop m03: (1.9312756s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-609900 status: exit status 7 (1.5157093s)

                                                
                                                
-- stdout --
	multinode-609900
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-609900-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-609900-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-609900 status --alsologtostderr: exit status 7 (1.4360708s)

                                                
                                                
-- stdout --
	multinode-609900
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-609900-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-609900-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 07:35:18.201103    4352 out.go:345] Setting OutFile to fd 1176 ...
	I0915 07:35:18.279140    4352 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:35:18.279140    4352 out.go:358] Setting ErrFile to fd 1536...
	I0915 07:35:18.279140    4352 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:35:18.292861    4352 out.go:352] Setting JSON to false
	I0915 07:35:18.293394    4352 mustload.go:65] Loading cluster: multinode-609900
	I0915 07:35:18.293490    4352 notify.go:220] Checking for updates...
	I0915 07:35:18.294205    4352 config.go:182] Loaded profile config "multinode-609900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 07:35:18.294205    4352 status.go:255] checking status of multinode-609900 ...
	I0915 07:35:18.310317    4352 cli_runner.go:164] Run: docker container inspect multinode-609900 --format={{.State.Status}}
	I0915 07:35:18.382041    4352 status.go:330] multinode-609900 host status = "Running" (err=<nil>)
	I0915 07:35:18.382041    4352 host.go:66] Checking if "multinode-609900" exists ...
	I0915 07:35:18.390043    4352 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-609900
	I0915 07:35:18.457048    4352 host.go:66] Checking if "multinode-609900" exists ...
	I0915 07:35:18.469101    4352 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:35:18.476392    4352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-609900
	I0915 07:35:18.555331    4352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52440 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\multinode-609900\id_rsa Username:docker}
	I0915 07:35:18.690232    4352 ssh_runner.go:195] Run: systemctl --version
	I0915 07:35:18.719089    4352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:35:18.752101    4352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-609900
	I0915 07:35:18.833691    4352 kubeconfig.go:125] found "multinode-609900" server: "https://127.0.0.1:52439"
	I0915 07:35:18.833730    4352 api_server.go:166] Checking apiserver status ...
	I0915 07:35:18.845891    4352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:35:18.883642    4352 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2469/cgroup
	I0915 07:35:18.909029    4352 api_server.go:182] apiserver freezer: "7:freezer:/docker/8c038d3e5c67184d2aea568ce5714f6888f440098b6f3ec42185b92bce42a459/kubepods/burstable/pode9e09543206426b375bc40c92b37ed24/db882f9275557793b17c6b3f9f627bfe6d2d5cf3cdc08b658351edd63eefd768"
	I0915 07:35:18.924561    4352 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8c038d3e5c67184d2aea568ce5714f6888f440098b6f3ec42185b92bce42a459/kubepods/burstable/pode9e09543206426b375bc40c92b37ed24/db882f9275557793b17c6b3f9f627bfe6d2d5cf3cdc08b658351edd63eefd768/freezer.state
	I0915 07:35:18.943073    4352 api_server.go:204] freezer state: "THAWED"
	I0915 07:35:18.943073    4352 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:52439/healthz ...
	I0915 07:35:18.954088    4352 api_server.go:279] https://127.0.0.1:52439/healthz returned 200:
	ok
	I0915 07:35:18.954088    4352 status.go:422] multinode-609900 apiserver status = Running (err=<nil>)
	I0915 07:35:18.954088    4352 status.go:257] multinode-609900 status: &{Name:multinode-609900 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:35:18.954088    4352 status.go:255] checking status of multinode-609900-m02 ...
	I0915 07:35:18.969365    4352 cli_runner.go:164] Run: docker container inspect multinode-609900-m02 --format={{.State.Status}}
	I0915 07:35:19.047092    4352 status.go:330] multinode-609900-m02 host status = "Running" (err=<nil>)
	I0915 07:35:19.047195    4352 host.go:66] Checking if "multinode-609900-m02" exists ...
	I0915 07:35:19.055960    4352 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-609900-m02
	I0915 07:35:19.133240    4352 host.go:66] Checking if "multinode-609900-m02" exists ...
	I0915 07:35:19.147038    4352 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:35:19.153700    4352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-609900-m02
	I0915 07:35:19.226679    4352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:52521 SSHKeyPath:C:\Users\jenkins.minikube2\minikube-integration\.minikube\machines\multinode-609900-m02\id_rsa Username:docker}
	I0915 07:35:19.381968    4352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:35:19.406449    4352 status.go:257] multinode-609900-m02 status: &{Name:multinode-609900-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:35:19.406449    4352 status.go:255] checking status of multinode-609900-m03 ...
	I0915 07:35:19.420718    4352 cli_runner.go:164] Run: docker container inspect multinode-609900-m03 --format={{.State.Status}}
	I0915 07:35:19.498188    4352 status.go:330] multinode-609900-m03 host status = "Stopped" (err=<nil>)
	I0915 07:35:19.498188    4352 status.go:343] host is not running, skipping remaining checks
	I0915 07:35:19.498188    4352 status.go:257] multinode-609900-m03 status: &{Name:multinode-609900-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (4.88s)
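
The apiserver probe recorded in the stderr above boils down to three checks: find the kube-apiserver process, confirm its freezer cgroup is THAWED, and hit /healthz on the forwarded port. A rough shell equivalent, using the profile name, port 52439, and cgroup path from the log (a sketch only; the real check runs through minikube's ssh_runner, not these exact commands):

    minikube ssh -p multinode-609900 -- sudo pgrep -xnf kube-apiserver.*minikube.*
    minikube ssh -p multinode-609900 -- sudo cat /sys/fs/cgroup/freezer/<cgroup path from the log>/freezer.state   # expect "THAWED"
    curl -k https://127.0.0.1:52439/healthz                                                                        # expect "ok"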

                                                
                                    
TestMultiNode/serial/StartAfterStop (18.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-609900 node start m03 -v=7 --alsologtostderr: (16.5733672s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-609900 status -v=7 --alsologtostderr: (1.8614737s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (18.63s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (117.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-609900
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-609900
E0915 07:35:38.656130    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-609900: (25.2327252s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-609900 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-609900 --wait=true -v=8 --alsologtostderr: (1m31.5433746s)
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-609900
--- PASS: TestMultiNode/serial/RestartKeepsNodes (117.25s)

                                                
                                    
TestMultiNode/serial/DeleteNode (10.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-609900 node delete m03: (8.211241s)
multinode_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 status --alsologtostderr
multinode_test.go:422: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-609900 status --alsologtostderr: (1.5466183s)
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (10.21s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 stop
multinode_test.go:345: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-609900 stop: (23.6279334s)
multinode_test.go:351: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-609900 status: exit status 7 (411.4611ms)

                                                
                                                
-- stdout --
	multinode-609900
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-609900-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-609900 status --alsologtostderr: exit status 7 (424.6735ms)

                                                
                                                
-- stdout --
	multinode-609900
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-609900-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0915 07:38:09.760005     764 out.go:345] Setting OutFile to fd 1456 ...
	I0915 07:38:09.852288     764 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:38:09.852288     764 out.go:358] Setting ErrFile to fd 1032...
	I0915 07:38:09.852288     764 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:38:09.864251     764 out.go:352] Setting JSON to false
	I0915 07:38:09.864251     764 mustload.go:65] Loading cluster: multinode-609900
	I0915 07:38:09.864251     764 notify.go:220] Checking for updates...
	I0915 07:38:09.868067     764 config.go:182] Loaded profile config "multinode-609900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0915 07:38:09.868067     764 status.go:255] checking status of multinode-609900 ...
	I0915 07:38:09.884325     764 cli_runner.go:164] Run: docker container inspect multinode-609900 --format={{.State.Status}}
	I0915 07:38:09.965595     764 status.go:330] multinode-609900 host status = "Stopped" (err=<nil>)
	I0915 07:38:09.965633     764 status.go:343] host is not running, skipping remaining checks
	I0915 07:38:09.965633     764 status.go:257] multinode-609900 status: &{Name:multinode-609900 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:38:09.965666     764 status.go:255] checking status of multinode-609900-m02 ...
	I0915 07:38:09.981717     764 cli_runner.go:164] Run: docker container inspect multinode-609900-m02 --format={{.State.Status}}
	I0915 07:38:10.069818     764 status.go:330] multinode-609900-m02 host status = "Stopped" (err=<nil>)
	I0915 07:38:10.069818     764 status.go:343] host is not running, skipping remaining checks
	I0915 07:38:10.071818     764 status.go:257] multinode-609900-m02 status: &{Name:multinode-609900-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.48s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (54.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-609900 --wait=true -v=8 --alsologtostderr --driver=docker
E0915 07:38:41.737200    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-609900 --wait=true -v=8 --alsologtostderr --driver=docker: (52.6709512s)
multinode_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-609900 status --alsologtostderr
multinode_test.go:382: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-609900 status --alsologtostderr: (1.5220395s)
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.69s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (64.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-609900
multinode_test.go:464: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-609900-m02 --driver=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-609900-m02 --driver=docker: exit status 14 (320.989ms)

                                                
                                                
-- stdout --
	* [multinode-609900-m02] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-609900-m02' is duplicated with machine name 'multinode-609900-m02' in profile 'multinode-609900'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-609900-m03 --driver=docker
E0915 07:39:37.816442    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-609900-m03 --driver=docker: (58.37039s)
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-609900
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-609900: exit status 80 (863.6761ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-609900 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-609900-m03 already exists in multinode-609900-m03 profile
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube_node_e3f75f9fdd712fd5423563a6a11e787bf6359068_1.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-609900-m03
multinode_test.go:484: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-609900-m03: (4.6702336s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (64.48s)

                                                
                                    
TestPreload (178.57s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-760400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4
E0915 07:40:38.658987    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-760400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4: (2m1.1351133s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-760400 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-760400 image pull gcr.io/k8s-minikube/busybox: (2.2668804s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-760400
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-760400: (11.9587596s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-760400 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-760400 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker: (38.4890477s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-760400 image list
helpers_test.go:175: Cleaning up "test-preload-760400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-760400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-760400: (4.0821946s)
--- PASS: TestPreload (178.57s)
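
Stripped to its minikube invocations, the preload check above is: start without a preloaded tarball on an older Kubernetes, pull an extra image, stop, restart, and confirm the image survived. A sketch using the flags from the log (the report invokes out/minikube-windows-amd64.exe; shortened to minikube here):

    minikube start -p test-preload-760400 --memory=2200 --preload=false --kubernetes-version=v1.24.4 --driver=docker
    minikube -p test-preload-760400 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-760400
    minikube start -p test-preload-760400 --memory=2200 --wait=true --driver=docker
    minikube -p test-preload-760400 image list    # busybox should still appear after the restart
    minikube delete -p test-preload-760400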

                                                
                                    
TestScheduledStopWindows (134.2s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-797600 --memory=2048 --driver=docker
E0915 07:44:20.915565    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-797600 --memory=2048 --driver=docker: (1m5.3425616s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-797600 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-797600 --schedule 5m: (1.4079103s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-797600 -n scheduled-stop-797600
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-797600 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-797600 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-797600 --schedule 5s: (1.869467s)
E0915 07:44:37.819822    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-797600
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-797600: exit status 7 (325.6676ms)

                                                
                                                
-- stdout --
	scheduled-stop-797600
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-797600 -n scheduled-stop-797600
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-797600 -n scheduled-stop-797600: exit status 7 (324.2949ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-797600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-797600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-797600: (3.0385697s)
--- PASS: TestScheduledStopWindows (134.20s)
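
The scheduled-stop flow exercised above, as plain commands (flags taken from the log; a sketch, not additional test output):

    minikube start -p scheduled-stop-797600 --memory=2048 --driver=docker
    minikube stop -p scheduled-stop-797600 --schedule 5m                                   # arm a stop five minutes out
    minikube status --format={{.TimeToStop}} -p scheduled-stop-797600
    minikube ssh -p scheduled-stop-797600 -- sudo systemctl show minikube-scheduled-stop --no-page
    minikube stop -p scheduled-stop-797600 --schedule 5s                                   # re-arm with a shorter delay
    minikube status -p scheduled-stop-797600                                               # exit status 7 once the host has stopped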

                                                
                                    
TestInsufficientStorage (43.05s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-854300 --memory=2048 --output=json --wait=true --driver=docker
E0915 07:45:38.662378    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-854300 --memory=2048 --output=json --wait=true --driver=docker: exit status 26 (38.2752176s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4a16265a-3321-42b5-887a-f1c28160b236","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-854300] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1f889c86-d27c-43fe-8225-388651dfdb1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube2\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"a77ada67-a0d9-40ff-8276-5b0b8352bf32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9c7764a1-7c88-47c5-be9d-41df44c4f41b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"b787e1ba-b9e4-409c-a490-04e01e224426","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19644"}}
	{"specversion":"1.0","id":"5e5543a9-7d04-43e3-b828-6aa46111c89e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ed2f14bb-afec-43b3-84e1-fddb0b213ccb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"ee261bdd-9692-4abe-8f50-bca6e7ac0c43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"f4abda84-a022-4ee4-b5a9-b2fd336873f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8981c4e8-0713-4795-bf8d-fb59764ee404","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"da1e9de4-64c8-4b79-ae11-b374bdfeae2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-854300\" primary control-plane node in \"insufficient-storage-854300\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3ded6c62-754e-4b0e-895c-c808a6de4944","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726358845-19644 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"19b995cc-bbab-46d2-a736-3f958728428b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f3cdada6-9da2-4c56-8ab4-5980b2960b9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-854300 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-854300 --output=json --layout=cluster: exit status 7 (817.2168ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-854300","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-854300","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0915 07:46:09.659184    3484 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-854300" does not appear in C:\Users\jenkins.minikube2\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-854300 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-854300 --output=json --layout=cluster: exit status 7 (834.8296ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-854300","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-854300","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0915 07:46:10.494103    4780 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-854300" does not appear in C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	E0915 07:46:10.531782    4780 status.go:560] unable to read event log: stat: CreateFile C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\insufficient-storage-854300\events.json: The system cannot find the file specified.

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-854300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-854300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-854300: (3.1189133s)
--- PASS: TestInsufficientStorage (43.05s)
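
The RSRC_DOCKER_STORAGE error above ships its own remediation advice; as commands it amounts to (a restatement of the advice text, not something the test runs):

    docker system prune                    # remove unused Docker data, optionally with -a
    minikube ssh -- docker system prune    # if using the Docker container runtime
    # or raise Docker Desktop's disk allocation (Docker icon > Preferences > Resources > Disk Image Size),
    # or pass --force to minikube start to skip the storage check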

                                                
                                    
TestRunningBinaryUpgrade (223.31s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.26.0.585044067.exe start -p running-upgrade-247900 --memory=2200 --vm-driver=docker
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.26.0.585044067.exe start -p running-upgrade-247900 --memory=2200 --vm-driver=docker: (1m27.6530438s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-247900 --memory=2200 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-247900 --memory=2200 --alsologtostderr -v=1 --driver=docker: (2m9.7388427s)
helpers_test.go:175: Cleaning up "running-upgrade-247900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-247900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-247900: (4.8442376s)
--- PASS: TestRunningBinaryUpgrade (223.31s)

                                                
                                    
TestKubernetesUpgrade (518.05s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-065100 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-065100 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker: (2m9.3547557s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-065100
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-065100: (15.9420998s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-065100 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-065100 status --format={{.Host}}: exit status 7 (393.5096ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-065100 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-065100 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker: (5m27.8633277s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-065100 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-065100 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-065100 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker: exit status 106 (290.6972ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-065100] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-065100
	    minikube start -p kubernetes-upgrade-065100 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0651002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-065100 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-065100 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-065100 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker: (38.8054364s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-065100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-065100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-065100: (5.2358912s)
--- PASS: TestKubernetesUpgrade (518.05s)
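
The upgrade path the test walks, condensed to its minikube calls (versions and profile name from the log; a sketch):

    minikube start -p kubernetes-upgrade-065100 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker
    minikube stop -p kubernetes-upgrade-065100
    minikube start -p kubernetes-upgrade-065100 --memory=2200 --kubernetes-version=v1.31.1 --driver=docker   # in-place upgrade
    minikube start -p kubernetes-upgrade-065100 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker   # refused: K8S_DOWNGRADE_UNSUPPORTED (exit 106)
    minikube delete -p kubernetes-upgrade-065100                                                             # recreating is the only way back to v1.20.0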

                                                
                                    
TestMissingContainerUpgrade (261.01s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.26.0.2815184272.exe start -p missing-upgrade-632600 --memory=2200 --driver=docker
version_upgrade_test.go:309: (dbg) Done: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.26.0.2815184272.exe start -p missing-upgrade-632600 --memory=2200 --driver=docker: (1m32.897185s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-632600
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-632600: (25.082595s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-632600
version_upgrade_test.go:329: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-632600 --memory=2200 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-632600 --memory=2200 --alsologtostderr -v=1 --driver=docker: (2m17.1815691s)
helpers_test.go:175: Cleaning up "missing-upgrade-632600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-632600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-632600: (4.6047577s)
--- PASS: TestMissingContainerUpgrade (261.01s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.22s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.22s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-516600 --no-kubernetes --kubernetes-version=1.20 --driver=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-516600 --no-kubernetes --kubernetes-version=1.20 --driver=docker: exit status 14 (384.7774ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-516600] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	  - KUBECONFIG=C:\Users\jenkins.minikube2\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube2\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.38s)

                                                
                                    
TestPause/serial/Start (142.61s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-516600 --memory=2048 --install-addons=false --wait=all --driver=docker
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-516600 --memory=2048 --install-addons=false --wait=all --driver=docker: (2m22.6089468s)
--- PASS: TestPause/serial/Start (142.61s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (104.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-516600 --driver=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-516600 --driver=docker: (1m42.6478284s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-516600 status -o json
no_kubernetes_test.go:200: (dbg) Done: out/minikube-windows-amd64.exe -p NoKubernetes-516600 status -o json: (2.0114302s)
--- PASS: TestNoKubernetes/serial/StartWithK8s (104.66s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (321.22s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.26.0.3965750553.exe start -p stopped-upgrade-516600 --memory=2200 --vm-driver=docker
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.26.0.3965750553.exe start -p stopped-upgrade-516600 --memory=2200 --vm-driver=docker: (3m43.3688017s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.26.0.3965750553.exe -p stopped-upgrade-516600 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube2\AppData\Local\Temp\minikube-v1.26.0.3965750553.exe -p stopped-upgrade-516600 stop: (14.220643s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-516600 --memory=2200 --alsologtostderr -v=1 --driver=docker
E0915 07:50:38.665293    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-516600 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m23.6316031s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (321.22s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (28.65s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-516600 --no-kubernetes --driver=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-516600 --no-kubernetes --driver=docker: (23.5006482s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-516600 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p NoKubernetes-516600 status -o json: exit status 2 (1.0450598s)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-516600","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-windows-amd64.exe delete -p NoKubernetes-516600
no_kubernetes_test.go:124: (dbg) Done: out/minikube-windows-amd64.exe delete -p NoKubernetes-516600: (4.0993919s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (28.65s)

                                                
                                    
TestNoKubernetes/serial/Start (31.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-516600 --no-kubernetes --driver=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-516600 --no-kubernetes --driver=docker: (31.9569397s)
--- PASS: TestNoKubernetes/serial/Start (31.96s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (51.96s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-516600 --alsologtostderr -v=1 --driver=docker
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-516600 --alsologtostderr -v=1 --driver=docker: (51.9358135s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (51.96s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.79s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-516600 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-516600 "sudo systemctl is-active --quiet service kubelet": exit status 1 (788.5658ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.79s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (3.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-windows-amd64.exe profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-windows-amd64.exe profile list: (1.6458106s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe profile list --output=json: (1.7156568s)
--- PASS: TestNoKubernetes/serial/ProfileList (3.36s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-windows-amd64.exe stop -p NoKubernetes-516600
no_kubernetes_test.go:158: (dbg) Done: out/minikube-windows-amd64.exe stop -p NoKubernetes-516600: (2.386095s)
--- PASS: TestNoKubernetes/serial/Stop (2.39s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (13.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-516600 --driver=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-516600 --driver=docker: (13.5729558s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (13.57s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.79s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-516600 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-516600 "sudo systemctl is-active --quiet service kubelet": exit status 1 (792.1257ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.79s)
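
Taken together, the NoKubernetes tests above reduce to starting a profile without Kubernetes and confirming the kubelet never comes up (commands from the log; a sketch):

    minikube start -p NoKubernetes-516600 --no-kubernetes --driver=docker
    minikube -p NoKubernetes-516600 status -o json                                           # Host Running, Kubelet/APIServer Stopped (exit status 2)
    minikube ssh -p NoKubernetes-516600 "sudo systemctl is-active --quiet service kubelet"   # non-zero exit: kubelet is not active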

                                                
                                    
TestPause/serial/Pause (1.59s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-516600 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-516600 --alsologtostderr -v=5: (1.5911178s)
--- PASS: TestPause/serial/Pause (1.59s)

                                                
                                    
TestPause/serial/VerifyStatus (1.02s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-516600 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-516600 --output=json --layout=cluster: exit status 2 (1.0189279s)

                                                
                                                
-- stdout --
	{"Name":"pause-516600","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-516600","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (1.02s)

                                                
                                    
TestPause/serial/Unpause (1.57s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-516600 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-516600 --alsologtostderr -v=5: (1.5656387s)
--- PASS: TestPause/serial/Unpause (1.57s)

                                                
                                    
TestPause/serial/PauseAgain (2.1s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-516600 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-516600 --alsologtostderr -v=5: (2.0970912s)
--- PASS: TestPause/serial/PauseAgain (2.10s)

                                                
                                    
TestPause/serial/DeletePaused (5.57s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-516600 --alsologtostderr -v=5
E0915 07:49:37.822539    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-516600 --alsologtostderr -v=5: (5.5728279s)
--- PASS: TestPause/serial/DeletePaused (5.57s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (9.82s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (9.5382128s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-516600
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-516600: exit status 1 (82.0812ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-516600: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (9.82s)
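
The pause group above, end to end, as plain commands (profile name from the log; a sketch):

    minikube pause -p pause-516600
    minikube status -p pause-516600 --output=json --layout=cluster   # StatusName "Paused", exit status 2
    minikube unpause -p pause-516600
    minikube delete -p pause-516600
    docker volume inspect pause-516600                               # "no such volume" once the profile is deleted
    docker network ls                                                # the profile network should no longer be listed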

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (3.13s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-516600
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-516600: (3.1250422s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (217.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-837200 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0
E0915 07:55:21.749512    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-837200 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0: (3m37.2768362s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (217.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (120.75s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-789100 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.31.1
E0915 07:55:38.667239    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-789100 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.31.1: (2m0.7464228s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (120.75s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.84s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-789100 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f3c8aba9-962a-4933-b987-9a3a58082e46] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f3c8aba9-962a-4933-b987-9a3a58082e46] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.0099735s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-789100 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.84s)
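
The deploy check above in kubectl terms (context name and manifest path from the log; kubectl wait stands in here for the test's own polling helper, so that line is an assumption, not what the test calls):

    kubectl --context no-preload-789100 create -f testdata\busybox.yaml
    kubectl --context no-preload-789100 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    kubectl --context no-preload-789100 exec busybox -- /bin/sh -c "ulimit -n"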

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.61s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-789100 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-789100 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.261672s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-789100 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.61s)
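
Note: the enable step above swaps the metrics-server image for registry.k8s.io/echoserver:1.4 and points its registry at fake.domain, and the test then inspects the deployment with kubectl describe. An added one-liner (not part of the test) to confirm the override landed in the pod template:

	kubectl --context no-preload-789100 -n kube-system get deploy metrics-server -o jsonpath="{.spec.template.spec.containers[0].image}"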

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-789100 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-789100 --alsologtostderr -v=3: (12.2719435s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.82s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-789100 -n no-preload-789100
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-789100 -n no-preload-789100: exit status 7 (348.9332ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-789100 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.82s)
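
Note: the non-zero exit above is expected at this point: with the host stopped, "minikube status" prints Stopped and returns exit status 7, which the test tolerates before enabling the dashboard addon. A minimal manual check, assuming the same stopped profile (cmd.exe shown; PowerShell would read $LASTEXITCODE instead):

	out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-789100 -n no-preload-789100
	echo %ERRORLEVEL%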

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (320.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-789100 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-789100 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.31.1: (5m19.0740992s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-789100 -n no-preload-789100
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (320.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (130.68s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-465600 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-465600 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.31.1: (2m10.674865s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (130.68s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (14.6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-837200 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) Done: kubectl --context old-k8s-version-837200 create -f testdata\busybox.yaml: (1.1623499s)
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [43b4553f-338f-49a4-884d-4615ebbc810f] Pending
helpers_test.go:344: "busybox" [43b4553f-338f-49a4-884d-4615ebbc810f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [43b4553f-338f-49a4-884d-4615ebbc810f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 13.0106073s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-837200 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (14.60s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (97.87s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-039900 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-039900 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.31.1: (1m37.867351s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (97.87s)
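
Note: this profile pins the API server to port 8444 via --apiserver-port=8444 instead of minikube's default 8443. An added way to confirm the port from the client side, not something the test runs:

	kubectl --context default-k8s-diff-port-039900 cluster-info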

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-837200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-837200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.1128144s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-837200 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.50s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (16.61s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-837200 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-837200 --alsologtostderr -v=3: (16.6117085s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.61s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-837200 -n old-k8s-version-837200
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-837200 -n old-k8s-version-837200: exit status 7 (404.2798ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-837200 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.93s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (404.75s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-837200 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0
E0915 07:59:37.828272    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-837200 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0: (6m43.650894s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-837200 -n old-k8s-version-837200
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-837200 -n old-k8s-version-837200: (1.0952368s)
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (404.75s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.72s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-039900 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a8615a37-b3b7-49e1-b63a-9525f5b29e49] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a8615a37-b3b7-49e1-b63a-9525f5b29e49] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.0079103s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-039900 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.72s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-039900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-039900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.9225742s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-039900 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (13.74s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-039900 --alsologtostderr -v=3
E0915 08:00:38.670251    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-039900 --alsologtostderr -v=3: (13.742157s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.74s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (12.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-465600 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [099e65a4-e354-4bd3-8fa3-66d111645284] Pending
helpers_test.go:344: "busybox" [099e65a4-e354-4bd3-8fa3-66d111645284] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [099e65a4-e354-4bd3-8fa3-66d111645284] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.0083519s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-465600 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.87s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-039900 -n default-k8s-diff-port-039900
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-039900 -n default-k8s-diff-port-039900: exit status 7 (367.2421ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-diff-port-039900 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.87s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (281.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-039900 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-039900 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.31.1: (4m40.3142152s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-039900 -n default-k8s-diff-port-039900
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (281.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-465600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-465600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.9991972s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-465600 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.32s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.52s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-465600 --alsologtostderr -v=3
E0915 08:01:00.927171    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-465600 --alsologtostderr -v=3: (12.5158351s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.52s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.85s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-465600 -n embed-certs-465600
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-465600 -n embed-certs-465600: exit status 7 (360.9588ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-465600 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.85s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (298.61s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-465600 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-465600 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.31.1: (4m57.672759s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-465600 -n embed-certs-465600
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (298.61s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-6c8kb" [b821dc43-2e09-43ee-ad01-24c97fba41db] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0071817s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
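
Note: this check only verifies that a Running pod carrying the k8s-app=kubernetes-dashboard label is present and healthy after the restart. The equivalent ad-hoc query, using the same label selector as the log above:

	kubectl --context no-preload-789100 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard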

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-6c8kb" [b821dc43-2e09-43ee-ad01-24c97fba41db] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0123331s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-789100 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.40s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.64s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-789100 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.64s)
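
Note: this step dumps the images present in the profile as JSON and reports any that are not part of the expected Kubernetes set; here the only extra image is the busybox test image deployed earlier. The same listing can be inspected directly:

	out/minikube-windows-amd64.exe -p no-preload-789100 image list --format=json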

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (7.48s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-789100 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p no-preload-789100 --alsologtostderr -v=1: (1.5589338s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-789100 -n no-preload-789100
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-789100 -n no-preload-789100: exit status 2 (936.8549ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-789100 -n no-preload-789100
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-789100 -n no-preload-789100: exit status 2 (922.8992ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p no-preload-789100 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p no-preload-789100 --alsologtostderr -v=1: (1.4590988s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-789100 -n no-preload-789100
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-789100 -n no-preload-789100: (1.5859472s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-789100 -n no-preload-789100
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-789100 -n no-preload-789100: (1.0143465s)
--- PASS: TestStartStop/group/no-preload/serial/Pause (7.48s)
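
Note: while the profile is paused, "minikube status" exits with status 2 and reports the API server as Paused and the kubelet as Stopped; the test treats those exit codes as acceptable and then unpauses. The same cycle, using the commands from the log:

	out/minikube-windows-amd64.exe pause -p no-preload-789100 --alsologtostderr -v=1
	out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-789100 -n no-preload-789100
	out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-789100 -n no-preload-789100
	out/minikube-windows-amd64.exe unpause -p no-preload-789100 --alsologtostderr -v=1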

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (69.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-876700 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.31.1
E0915 08:04:37.831512    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-876700 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.31.1: (1m9.2540783s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (69.25s)
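
Note: this profile starts with --network-plugin=cni, pushes a custom pod network CIDR (10.42.0.0/16) to kubeadm via --extra-config, and only waits for the apiserver, system pods and default service account. An added check (not run by the test) to confirm the node picked up a CIDR from that range:

	kubectl --context newest-cni-876700 get nodes -o jsonpath="{.items[0].spec.podCIDR}"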

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (4.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-876700 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-876700 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (4.1594858s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (4.16s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (9.78s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-876700 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-876700 --alsologtostderr -v=3: (9.7769326s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (9.78s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.83s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-876700 -n newest-cni-876700
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-876700 -n newest-cni-876700: exit status 7 (362.4943ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-876700 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.83s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (32.44s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-876700 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-876700 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.31.1: (31.3226763s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-876700 -n newest-cni-876700
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-876700 -n newest-cni-876700: (1.1221202s)
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (32.44s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-mq9dp" [9b6ee819-b2f7-43a9-afcf-0e7106285b08] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0160254s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.44s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-mq9dp" [9b6ee819-b2f7-43a9-afcf-0e7106285b08] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011177s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-039900 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.44s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.68s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p default-k8s-diff-port-039900 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.68s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (7.72s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-039900 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-039900 --alsologtostderr -v=1: (1.7375985s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-039900 -n default-k8s-diff-port-039900
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-039900 -n default-k8s-diff-port-039900: exit status 2 (1.0048225s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-039900 -n default-k8s-diff-port-039900
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-039900 -n default-k8s-diff-port-039900: exit status 2 (1.0131303s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-039900 --alsologtostderr -v=1
E0915 08:05:38.672967    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\functional-804700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-039900 --alsologtostderr -v=1: (1.474061s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-039900 -n default-k8s-diff-port-039900
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-039900 -n default-k8s-diff-port-039900: (1.3600089s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-039900 -n default-k8s-diff-port-039900
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-039900 -n default-k8s-diff-port-039900: (1.1335795s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.72s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.73s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-876700 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.73s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (10.54s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-876700 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p newest-cni-876700 --alsologtostderr -v=1: (2.2336716s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-876700 -n newest-cni-876700
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-876700 -n newest-cni-876700: exit status 2 (1.1990108s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-876700 -n newest-cni-876700
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-876700 -n newest-cni-876700: exit status 2 (1.2889236s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p newest-cni-876700 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p newest-cni-876700 --alsologtostderr -v=1: (2.1889241s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-876700 -n newest-cni-876700
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-876700 -n newest-cni-876700: (2.1178516s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-876700 -n newest-cni-876700
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-876700 -n newest-cni-876700: (1.5120336s)
--- PASS: TestStartStop/group/newest-cni/serial/Pause (10.54s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (120.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-760400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-760400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker: (2m0.6501751s)
--- PASS: TestNetworkPlugins/group/auto/Start (120.65s)
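
Note: the "auto" variant passes no --cni flag, so minikube picks its default networking for the docker driver. The later checks in this group inspect the node directly over ssh, for example the KubeletFlags step further down:

	out/minikube-windows-amd64.exe ssh -p auto-760400 "pgrep -a kubelet"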

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-wp7v2" [5592665a-bdd2-4c98-b8c4-d1164403fb69] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0102615s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-wp7v2" [5592665a-bdd2-4c98-b8c4-d1164403fb69] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.1295376s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-837200 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Done: kubectl --context old-k8s-version-837200 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: (1.1118993s)
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (121.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-760400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kindnet-760400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker: (2m1.5626148s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (121.56s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-svdzp" [dded786a-a79b-48ce-a587-6676414313d2] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0108014s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-837200 image list --format=json
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe -p old-k8s-version-837200 image list --format=json: (1.0908619s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (1.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (8.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-837200 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p old-k8s-version-837200 --alsologtostderr -v=1: (2.3235817s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-837200 -n old-k8s-version-837200
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-837200 -n old-k8s-version-837200: exit status 2 (962.0518ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-837200 -n old-k8s-version-837200
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-837200 -n old-k8s-version-837200: exit status 2 (922.3606ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p old-k8s-version-837200 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p old-k8s-version-837200 --alsologtostderr -v=1: (1.5707684s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-837200 -n old-k8s-version-837200
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-837200 -n old-k8s-version-837200: (1.6004152s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-837200 -n old-k8s-version-837200
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-837200 -n old-k8s-version-837200: (1.1682776s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (8.55s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.68s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-svdzp" [dded786a-a79b-48ce-a587-6676414313d2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0100424s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-465600 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.68s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.83s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p embed-certs-465600 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.83s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (9.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-465600 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-465600 --alsologtostderr -v=1: (1.8954874s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-465600 -n embed-certs-465600
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-465600 -n embed-certs-465600: exit status 2 (945.6831ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-465600 -n embed-certs-465600
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-465600 -n embed-certs-465600: exit status 2 (941.2123ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-465600 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-465600 --alsologtostderr -v=1: (1.4570644s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-465600 -n embed-certs-465600
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-465600 -n embed-certs-465600: (2.869846s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-465600 -n embed-certs-465600
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-465600 -n embed-certs-465600: (1.0234925s)
--- PASS: TestStartStop/group/embed-certs/serial/Pause (9.13s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (170.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-760400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p calico-760400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker: (2m50.028539s)
--- PASS: TestNetworkPlugins/group/calico/Start (170.03s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (103.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-flannel-760400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker
E0915 08:07:36.287550    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\no-preload-789100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 08:07:36.295549    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\no-preload-789100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 08:07:36.307683    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\no-preload-789100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 08:07:36.330331    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\no-preload-789100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 08:07:36.373224    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\no-preload-789100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 08:07:36.455878    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\no-preload-789100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 08:07:36.617864    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\no-preload-789100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 08:07:36.940269    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\no-preload-789100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 08:07:37.582784    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\no-preload-789100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 08:07:38.865284    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\no-preload-789100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 08:07:41.428039    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\no-preload-789100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 08:07:46.551079    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\no-preload-789100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-flannel-760400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker: (1m43.1715472s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (103.17s)
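Note: the E0915 cert_rotation.go errors interleaved above are not produced by the custom-flannel run itself; they appear to come from the shared test process (pid 8584), whose client-go certificate watcher still references client.crt files belonging to profiles that have already been torn down (no-preload-789100 here, old-k8s-version-837200 and others further down). One way to confirm those profiles no longer exist is:

	out/minikube-windows-amd64.exe profile list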

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-760400 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (1.00s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (21.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-760400 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fld7z" [89d84ddb-9e21-44af-8e43-2da06a18b1e2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0915 08:07:56.793502    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\no-preload-789100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-fld7z" [89d84ddb-9e21-44af-8e43-2da06a18b1e2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 20.0322457s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (21.01s)
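Note: the NetCatPod step applies testdata\netcat-deployment.yaml (the manifest itself is not reproduced in this report) and then waits for pods labelled app=netcat in the default namespace; the Pending -> Running transition above is just the image pull for the dnsutils container. A quick manual check of the same condition, using standard kubectl, would be:

	kubectl --context auto-760400 get pods -n default -l app=netcat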

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-q75qr" [5f3ed28e-a301-422e-97c4-303fb808514a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.0122263s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.02s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kindnet-760400 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.99s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (20.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-760400 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7svc9" [feb5952f-a83a-464d-b456-1172e7d6c672] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-7svc9" [feb5952f-a83a-464d-b456-1172e7d6c672] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 20.0071397s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (20.80s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-760400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.47s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-760400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.47s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-760400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.43s)
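Note: the DNS, Localhost and HairPin probes all execute inside the netcat deployment created by the NetCatPod step: DNS resolves kubernetes.default through the cluster DNS, Localhost checks that the pod can reach its own port 8080 over loopback, and HairPin checks that it can reach itself back through its own Service named netcat (hairpin traffic). The three probes, copied from the Run lines above, are:

	kubectl --context auto-760400 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-760400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-760400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"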

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p custom-flannel-760400 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.93s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (19.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-760400 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5xlw8" [6b079563-0469-4a23-bc04-d6be3b3a6cd3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5xlw8" [6b079563-0469-4a23-bc04-d6be3b3a6cd3] Running
E0915 08:08:36.952500    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-837200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 08:08:36.960491    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-837200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 08:08:36.973506    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-837200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 08:08:36.995508    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-837200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 08:08:37.037495    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-837200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 08:08:37.120193    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-837200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 08:08:37.281766    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-837200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 08:08:37.603171    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-837200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 08:08:38.246376    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-837200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 18.0097644s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (19.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-760400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.42s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-760400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.55s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-760400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.37s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-760400 exec deployment/netcat -- nslookup kubernetes.default
E0915 08:08:39.527875    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-837200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.41s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-760400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.40s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-760400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.38s)

                                                
                                    
TestNetworkPlugins/group/false/Start (123.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-760400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p false-760400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker: (2m3.1855017s)
--- PASS: TestNetworkPlugins/group/false/Start (123.19s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-xdfbr" [73c7461b-208f-413c-bb3c-3f597a08f410] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.0099899s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p calico-760400 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.97s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (27.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-760400 replace --force -f testdata\netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context calico-760400 replace --force -f testdata\netcat-deployment.yaml: (3.2792679s)
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hnncw" [de1c4a31-c530-4175-bec5-e329ae1e7c5e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hnncw" [de1c4a31-c530-4175-bec5-e329ae1e7c5e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 23.1789903s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (27.87s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (111.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-760400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-760400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker: (1m51.8715371s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (111.87s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (107.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p flannel-760400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker
E0915 08:09:37.833849    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\addons-291300\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p flannel-760400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker: (1m47.4470097s)
--- PASS: TestNetworkPlugins/group/flannel/Start (107.45s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-760400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.43s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-760400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.64s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-760400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.32s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (109.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-760400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker
E0915 08:10:57.438235    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-039900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-760400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker: (1m49.6004258s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (109.60s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-760400 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.87s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (18.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-760400 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nb4n2" [5c43a9ee-3d72-4338-ae58-a0ee71c28aee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nb4n2" [5c43a9ee-3d72-4338-ae58-a0ee71c28aee] Running
E0915 08:11:21.251134    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-837200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 18.0093535s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (18.63s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-h225p" [fbfad2b7-4f6d-4240-a590-9e929fc05957] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.0119253s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.02s)
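Note: ControllerPod steps such as this one gate the rest of the group on the CNI's own daemonset pod being Running; for flannel that is the kube-flannel-ds pod in the kube-flannel namespace. The wait is roughly equivalent to polling:

	kubectl --context flannel-760400 -n kube-flannel get pods -l app=flannel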

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-760400 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.81s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (20.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-760400 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-lchpp" [9950771a-62ed-40ac-939c-ef5170492491] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-lchpp" [9950771a-62ed-40ac-939c-ef5170492491] Running
E0915 08:11:38.400833    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-039900\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 20.0098073s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (20.64s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-760400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.38s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-760400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.46s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-760400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.42s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (1.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p flannel-760400 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p flannel-760400 "pgrep -a kubelet": (1.0050263s)
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (1.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (20.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-760400 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6zc92" [037af864-2536-4f97-9ddf-a25cbf93cc84] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6zc92" [037af864-2536-4f97-9ddf-a25cbf93cc84] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 20.0123599s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (20.64s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-760400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.38s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-760400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.35s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-760400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.35s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-760400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.45s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-760400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.34s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-760400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.39s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (95.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-760400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-760400 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker: (1m35.6488549s)
--- PASS: TestNetworkPlugins/group/bridge/Start (95.65s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-760400 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.85s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (20.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-760400 replace --force -f testdata\netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context kubenet-760400 replace --force -f testdata\netcat-deployment.yaml: (1.0867529s)
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-lgwxz" [8258287a-22fb-42a1-8d6a-b488e413f2eb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0915 08:12:36.290858    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\no-preload-789100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-lgwxz" [8258287a-22fb-42a1-8d6a-b488e413f2eb] Running
E0915 08:12:51.416202    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\auto-760400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 08:12:51.422618    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\auto-760400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 08:12:51.435165    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\auto-760400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 08:12:51.456747    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\auto-760400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 08:12:51.498239    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\auto-760400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 08:12:51.580093    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\auto-760400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 08:12:51.742168    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\auto-760400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 08:12:52.063625    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\auto-760400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 08:12:52.706194    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\auto-760400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0915 08:12:53.988196    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\auto-760400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 19.3606145s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (20.79s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-760400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.35s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-760400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.31s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-760400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.33s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-760400 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.78s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (16.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-760400 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-whxsc" [035a600d-026c-41e1-9b05-0fb5c381bb86] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0915 08:14:02.068116    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\custom-flannel-760400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-whxsc" [035a600d-026c-41e1-9b05-0fb5c381bb86] Running
E0915 08:14:05.095638    8584 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube2\\minikube-integration\\.minikube\\profiles\\old-k8s-version-837200\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 16.0104421s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (16.63s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-760400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.34s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-760400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-760400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.31s)

                                                
                                    

Test skip (24/340)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
TestAddons/parallel/Ingress (18.69s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-291300 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-291300 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-291300 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:247: (dbg) Done: kubectl --context addons-291300 replace --force -f testdata\nginx-pod-svc.yaml: (1.0761901s)
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [50c1239c-d7f2-456b-840a-009b53b3a338] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [50c1239c-d7f2-456b-840a-009b53b3a338] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 16.0077247s
addons_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-291300 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (18.69s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (300.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-804700 --alsologtostderr -v=1]
functional_test.go:916: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-804700 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 1756: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.01s)
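Note: although recorded as a skip, this step consumed its full 300s budget: the dashboard process never printed a URL and the cleanup helper could not terminate it afterwards (pid 1756, Access is denied). Reproducing it interactively amounts to re-running the daemon command from the log:

	out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-804700 --alsologtostderr -v=1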

                                                
                                    
TestFunctional/parallel/MountCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:64: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (9.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-804700 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-804700 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-lz5sw" [4956f306-3c0d-45f6-bda9-d427392a2e75] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-lz5sw" [4956f306-3c0d-45f6-bda9-d427392a2e75] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.0075026s
functional_test.go:1646: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (9.45s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestScheduledStopUnix (0s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.83s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-773900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-773900
--- SKIP: TestStartStop/group/disable-driver-mounts (0.83s)

                                                
                                    
TestNetworkPlugins/group/cilium (16.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-760400 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-760400

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-760400

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-760400

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-760400

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-760400

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-760400

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-760400

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-760400

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-760400

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-760400

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-760400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-760400"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-760400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-760400"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-760400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-760400"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-760400

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-760400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-760400"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-760400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-760400"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-760400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-760400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-760400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-760400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-760400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-760400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-760400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-760400" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-760400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-760400"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-760400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-760400"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-760400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-760400"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-760400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-760400"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-760400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-760400"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-760400

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-760400

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-760400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-760400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-760400

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-760400

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-760400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-760400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-760400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-760400" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-760400" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-760400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-760400"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-760400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-760400"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-760400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-760400"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-760400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-760400"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-760400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-760400"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: C:\Users\jenkins.minikube2\minikube-integration\.minikube\ca.crt
    extensions:
    - extension:
        last-update: Sun, 15 Sep 2024 07:50:51 GMT
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: cluster_info
    server: https://127.0.0.1:54066
  name: missing-upgrade-632600
contexts:
- context:
    cluster: missing-upgrade-632600
    extensions:
    - extension:
        last-update: Sun, 15 Sep 2024 07:50:51 GMT
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: context_info
    namespace: default
    user: missing-upgrade-632600
  name: missing-upgrade-632600
current-context: ""
kind: Config
preferences: {}
users:
- name: missing-upgrade-632600
  user:
    client-certificate: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\missing-upgrade-632600\client.crt
    client-key: C:\Users\jenkins.minikube2\minikube-integration\.minikube\profiles\missing-upgrade-632600\client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-760400

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-760400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-760400"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-760400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-760400"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-760400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-760400"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-760400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-760400"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-760400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-760400"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-760400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-760400"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-760400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-760400"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-760400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-760400"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-760400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-760400"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-760400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-760400"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-760400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-760400"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-760400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-760400"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-760400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-760400"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-760400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-760400"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-760400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-760400"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-760400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-760400"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-760400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-760400"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-760400" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-760400"

                                                
                                                
----------------------- debugLogs end: cilium-760400 [took: 16.0496031s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-760400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cilium-760400
--- SKIP: TestNetworkPlugins/group/cilium (16.98s)

                                                
                                    