Test Report: Docker_Windows 19264

9e9f0a1e532281828d0abd077e39f9c759354b34:2024-07-17:35371

Tests failed (5/348)

Order  Failed test                                             Duration (s)
39     TestAddons/parallel/Ingress                             491.79
65     TestErrorSpam/setup                                     71.15
89     TestFunctional/serial/MinikubeKubectlCmdDirectly        6.99
96     TestFunctional/parallel/ConfigCmd                       2.05
330    TestStartStop/group/old-k8s-version/serial/SecondStart  446.52
TestAddons/parallel/Ingress (491.79s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-285600 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-285600 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-285600 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:247: (dbg) Non-zero exit: kubectl --context addons-285600 replace --force -f testdata\nginx-pod-svc.yaml: exit status 1 (1.7913502s)

-- stdout --
	service/nginx replaced

-- /stdout --
** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": service "volcano-admission-service" not found

** /stderr **
addons_test.go:249: failed to kubectl replace nginx-pod-svc. args "kubectl --context addons-285600 replace --force -f testdata\\nginx-pod-svc.yaml". exit status 1
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
addons_test.go:252: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:252: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-285600 -n addons-285600
addons_test.go:252: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-285600 -n addons-285600: (1.3616315s)
addons_test.go:252: TestAddons/parallel/Ingress: showing logs for failed pods as of 2024-07-17 00:40:09.4543688 +0000 UTC m=+1196.037716901
addons_test.go:253: failed waiting for ngnix pod: run=nginx within 8m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-285600
helpers_test.go:235: (dbg) docker inspect addons-285600:

-- stdout --
	[
	    {
	        "Id": "83efe2150ede5a5590fd3d86688a33ae3a35e61c12e69a69fba67d88ac5ca12b",
	        "Created": "2024-07-17T00:23:55.071692657Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 904,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-17T00:23:56.427055375Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b90fcd82d9a0f97666ccbedd0bec36ffa6ae451ed5f5fff480c00361af0818c6",
	        "ResolvConfPath": "/var/lib/docker/containers/83efe2150ede5a5590fd3d86688a33ae3a35e61c12e69a69fba67d88ac5ca12b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/83efe2150ede5a5590fd3d86688a33ae3a35e61c12e69a69fba67d88ac5ca12b/hostname",
	        "HostsPath": "/var/lib/docker/containers/83efe2150ede5a5590fd3d86688a33ae3a35e61c12e69a69fba67d88ac5ca12b/hosts",
	        "LogPath": "/var/lib/docker/containers/83efe2150ede5a5590fd3d86688a33ae3a35e61c12e69a69fba67d88ac5ca12b/83efe2150ede5a5590fd3d86688a33ae3a35e61c12e69a69fba67d88ac5ca12b-json.log",
	        "Name": "/addons-285600",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-285600:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-285600",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f2dfb3a58b1ae1c326f6dd9f63c65c7e98149d6781fa9a642a4b736d6895e8ae-init/diff:/var/lib/docker/overlay2/6088a4728183ef5756e13b25ed8f3f4eadd6ab8d4c2088bd541d2084f39281eb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f2dfb3a58b1ae1c326f6dd9f63c65c7e98149d6781fa9a642a4b736d6895e8ae/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f2dfb3a58b1ae1c326f6dd9f63c65c7e98149d6781fa9a642a4b736d6895e8ae/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f2dfb3a58b1ae1c326f6dd9f63c65c7e98149d6781fa9a642a4b736d6895e8ae/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-285600",
	                "Source": "/var/lib/docker/volumes/addons-285600/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-285600",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-285600",
	                "name.minikube.sigs.k8s.io": "addons-285600",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "063dc366d697187e144fe575259c73b8c8d93506dff66a6bac2196d39dc36433",
	            "SandboxKey": "/var/run/docker/netns/063dc366d697",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62189"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62190"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62191"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62187"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62188"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-285600": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "82a3ff406106e0821f464b57b24c34b1157b309ddbc6e97d3d23db5b6adf3c5d",
	                    "EndpointID": "cd273e9262e5234f9e047ab3c7a1b23828cdd5a90513687a8a3065ef07954043",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-285600",
	                        "83efe2150ede"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-285600 -n addons-285600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-285600 -n addons-285600: (1.3509893s)
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-285600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-285600 logs -n 25: (2.84342s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|-------------------|---------|---------------------|---------------------|
	| delete  | -p download-only-168500                                                                     | download-only-168500   | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:20 UTC | 17 Jul 24 00:20 UTC |
	| start   | -o=json --download-only                                                                     | download-only-469400   | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:20 UTC |                     |
	|         | -p download-only-469400                                                                     |                        |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |                   |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                                                         |                        |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |                   |         |                     |                     |
	|         | --driver=docker                                                                             |                        |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:20 UTC | 17 Jul 24 00:20 UTC |
	| delete  | -p download-only-469400                                                                     | download-only-469400   | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:20 UTC | 17 Jul 24 00:20 UTC |
	| delete  | -p download-only-528400                                                                     | download-only-528400   | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:20 UTC | 17 Jul 24 00:20 UTC |
	| delete  | -p download-only-168500                                                                     | download-only-168500   | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:20 UTC | 17 Jul 24 00:20 UTC |
	| delete  | -p download-only-469400                                                                     | download-only-469400   | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:20 UTC | 17 Jul 24 00:20 UTC |
	| start   | --download-only -p                                                                          | download-docker-879300 | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:20 UTC |                     |
	|         | download-docker-879300                                                                      |                        |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |                   |         |                     |                     |
	|         | --driver=docker                                                                             |                        |                   |         |                     |                     |
	| delete  | -p download-docker-879300                                                                   | download-docker-879300 | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:21 UTC | 17 Jul 24 00:21 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-561300   | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:21 UTC |                     |
	|         | binary-mirror-561300                                                                        |                        |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |                   |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |                   |         |                     |                     |
	|         | http://127.0.0.1:62154                                                                      |                        |                   |         |                     |                     |
	|         | --driver=docker                                                                             |                        |                   |         |                     |                     |
	| delete  | -p binary-mirror-561300                                                                     | binary-mirror-561300   | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:21 UTC | 17 Jul 24 00:21 UTC |
	| addons  | enable dashboard -p                                                                         | addons-285600          | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:21 UTC |                     |
	|         | addons-285600                                                                               |                        |                   |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-285600          | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:21 UTC |                     |
	|         | addons-285600                                                                               |                        |                   |         |                     |                     |
	| start   | -p addons-285600 --wait=true                                                                | addons-285600          | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:21 UTC | 17 Jul 24 00:31 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                        |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |                   |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |                   |         |                     |                     |
	|         | --driver=docker --addons=ingress                                                            |                        |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |                   |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |                   |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-285600          | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:31 UTC | 17 Jul 24 00:31 UTC |
	|         | -p addons-285600                                                                            |                        |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |                   |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-285600          | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:31 UTC | 17 Jul 24 00:31 UTC |
	|         | -p addons-285600                                                                            |                        |                   |         |                     |                     |
	| addons  | addons-285600 addons disable                                                                | addons-285600          | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:31 UTC | 17 Jul 24 00:31 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |                   |         |                     |                     |
	|         | -v=1                                                                                        |                        |                   |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-285600          | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:31 UTC | 17 Jul 24 00:31 UTC |
	|         | addons-285600                                                                               |                        |                   |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-285600          | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:31 UTC | 17 Jul 24 00:31 UTC |
	|         | addons-285600                                                                               |                        |                   |         |                     |                     |
	| addons  | addons-285600 addons                                                                        | addons-285600          | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:32 UTC | 17 Jul 24 00:32 UTC |
	|         | disable metrics-server                                                                      |                        |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |                   |         |                     |                     |
	| addons  | addons-285600 addons disable                                                                | addons-285600          | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:32 UTC | 17 Jul 24 00:32 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |                   |         |                     |                     |
	| ssh     | addons-285600 ssh cat                                                                       | addons-285600          | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:32 UTC | 17 Jul 24 00:32 UTC |
	|         | /opt/local-path-provisioner/pvc-bf222cc7-b83d-40d5-a3e3-6b40029f896b_default_test-pvc/file1 |                        |                   |         |                     |                     |
	| addons  | addons-285600 addons disable                                                                | addons-285600          | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:32 UTC | 17 Jul 24 00:33 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |                   |         |                     |                     |
	| addons  | addons-285600 addons                                                                        | addons-285600          | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:32 UTC | 17 Jul 24 00:32 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |                   |         |                     |                     |
	| addons  | addons-285600 addons                                                                        | addons-285600          | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:32 UTC | 17 Jul 24 00:32 UTC |
	|         | disable volumesnapshots                                                                     |                        |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:21:06
	Running on machine: minikube3
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:21:06.341226   15320 out.go:291] Setting OutFile to fd 984 ...
	I0717 00:21:06.341979   15320 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:21:06.341979   15320 out.go:304] Setting ErrFile to fd 988...
	I0717 00:21:06.341979   15320 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:21:06.365368   15320 out.go:298] Setting JSON to false
	I0717 00:21:06.368780   15320 start.go:129] hostinfo: {"hostname":"minikube3","uptime":7681,"bootTime":1721167984,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0717 00:21:06.368780   15320 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 00:21:06.373980   15320 out.go:177] * [addons-285600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0717 00:21:06.378396   15320 notify.go:220] Checking for updates...
	I0717 00:21:06.379402   15320 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0717 00:21:06.383876   15320 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 00:21:06.384786   15320 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0717 00:21:06.386784   15320 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 00:21:06.389127   15320 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:21:06.395931   15320 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:21:06.677862   15320 docker.go:123] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0717 00:21:06.694682   15320 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:21:07.040458   15320 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:80 SystemTime:2024-07-17 00:21:07.000520604 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0717 00:21:07.044388   15320 out.go:177] * Using the docker driver based on user configuration
	I0717 00:21:07.049189   15320 start.go:297] selected driver: docker
	I0717 00:21:07.049189   15320 start.go:901] validating driver "docker" against <nil>
	I0717 00:21:07.049189   15320 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 00:21:07.109948   15320 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:21:07.491240   15320 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:80 SystemTime:2024-07-17 00:21:07.43508219 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0717 00:21:07.491374   15320 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 00:21:07.493568   15320 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:21:07.504884   15320 out.go:177] * Using Docker Desktop driver with root privileges
	I0717 00:21:07.516345   15320 cni.go:84] Creating CNI manager for ""
	I0717 00:21:07.516345   15320 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 00:21:07.516345   15320 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 00:21:07.518728   15320 start.go:340] cluster config:
	{Name:addons-285600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-285600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:21:07.522344   15320 out.go:177] * Starting "addons-285600" primary control-plane node in "addons-285600" cluster
	I0717 00:21:07.522609   15320 cache.go:121] Beginning downloading kic base image for docker with docker
	I0717 00:21:07.528019   15320 out.go:177] * Pulling base image v0.0.44-1721146479-19264 ...
	I0717 00:21:07.532046   15320 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 00:21:07.532090   15320 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local docker daemon
	I0717 00:21:07.532270   15320 preload.go:146] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0717 00:21:07.532306   15320 cache.go:56] Caching tarball of preloaded images
	I0717 00:21:07.532417   15320 preload.go:172] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 00:21:07.532971   15320 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 00:21:07.533865   15320 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\config.json ...
	I0717 00:21:07.533865   15320 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\config.json: {Name:mk2409eee495977b0b2d7fc0b14530eb6deb39e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:21:07.708742   15320 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e to local cache
	I0717 00:21:07.708742   15320 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721146479-19264@sha256_7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar
	I0717 00:21:07.708742   15320 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721146479-19264@sha256_7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar
	I0717 00:21:07.709460   15320 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local cache directory
	I0717 00:21:07.709664   15320 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local cache directory, skipping pull
	I0717 00:21:07.709731   15320 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e exists in cache, skipping pull
	I0717 00:21:07.709731   15320 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e as a tarball
	I0717 00:21:07.709731   15320 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e from local cache
	I0717 00:21:07.709731   15320 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721146479-19264@sha256_7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar
	I0717 00:22:23.753825   15320 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e from cached tarball
	I0717 00:22:23.753825   15320 cache.go:194] Successfully downloaded all kic artifacts
	I0717 00:22:23.753825   15320 start.go:360] acquireMachinesLock for addons-285600: {Name:mkaeee13e406436c5053fa9b62d0bbcfa90a676d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:22:23.754632   15320 start.go:364] duration metric: took 252.3µs to acquireMachinesLock for "addons-285600"
	I0717 00:22:23.754632   15320 start.go:93] Provisioning new machine with config: &{Name:addons-285600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-285600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 00:22:23.754632   15320 start.go:125] createHost starting for "" (driver="docker")
	I0717 00:22:23.760398   15320 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0717 00:22:23.761434   15320 start.go:159] libmachine.API.Create for "addons-285600" (driver="docker")
	I0717 00:22:23.761692   15320 client.go:168] LocalClient.Create starting
	I0717 00:22:23.763334   15320 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0717 00:22:23.933345   15320 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0717 00:22:24.187858   15320 cli_runner.go:164] Run: docker network inspect addons-285600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 00:22:24.397011   15320 cli_runner.go:211] docker network inspect addons-285600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 00:22:24.410487   15320 network_create.go:284] running [docker network inspect addons-285600] to gather additional debugging logs...
	I0717 00:22:24.410487   15320 cli_runner.go:164] Run: docker network inspect addons-285600
	W0717 00:22:24.594734   15320 cli_runner.go:211] docker network inspect addons-285600 returned with exit code 1
	I0717 00:22:24.594734   15320 network_create.go:287] error running [docker network inspect addons-285600]: docker network inspect addons-285600: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-285600 not found
	I0717 00:22:24.594734   15320 network_create.go:289] output of [docker network inspect addons-285600]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-285600 not found
	
	** /stderr **
	I0717 00:22:24.607278   15320 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 00:22:24.842553   15320 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d09ce0}
	I0717 00:22:24.843177   15320 network_create.go:124] attempt to create docker network addons-285600 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0717 00:22:24.848375   15320 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-285600 addons-285600
	I0717 00:22:27.257365   15320 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-285600 addons-285600: (2.4088698s)
	I0717 00:22:27.257440   15320 network_create.go:108] docker network addons-285600 192.168.49.0/24 created
	I0717 00:22:27.257440   15320 kic.go:121] calculated static IP "192.168.49.2" for the "addons-285600" container
	I0717 00:22:27.275411   15320 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 00:22:27.475914   15320 cli_runner.go:164] Run: docker volume create addons-285600 --label name.minikube.sigs.k8s.io=addons-285600 --label created_by.minikube.sigs.k8s.io=true
	I0717 00:22:27.660449   15320 oci.go:103] Successfully created a docker volume addons-285600
	I0717 00:22:27.672778   15320 cli_runner.go:164] Run: docker run --rm --name addons-285600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-285600 --entrypoint /usr/bin/test -v addons-285600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e -d /var/lib
	I0717 00:23:09.805320   15320 cli_runner.go:217] Completed: docker run --rm --name addons-285600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-285600 --entrypoint /usr/bin/test -v addons-285600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e -d /var/lib: (42.1322184s)
	I0717 00:23:09.805320   15320 oci.go:107] Successfully prepared a docker volume addons-285600
	I0717 00:23:09.805320   15320 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 00:23:09.805320   15320 kic.go:194] Starting extracting preloaded images to volume ...
	I0717 00:23:09.824311   15320 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-285600:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 00:23:53.875772   15320 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-285600:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e -I lz4 -xf /preloaded.tar -C /extractDir: (44.0511216s)
	I0717 00:23:53.875772   15320 kic.go:203] duration metric: took 44.0701127s to extract preloaded images to volume ...
	I0717 00:23:53.891769   15320 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:23:54.358006   15320 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:84 SystemTime:2024-07-17 00:23:54.286017443 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0717 00:23:54.373012   15320 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 00:23:54.844852   15320 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-285600 --name addons-285600 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-285600 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-285600 --network addons-285600 --ip 192.168.49.2 --volume addons-285600:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e
	I0717 00:23:56.510609   15320 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-285600 --name addons-285600 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-285600 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-285600 --network addons-285600 --ip 192.168.49.2 --volume addons-285600:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e: (1.6657447s)
	I0717 00:23:56.529626   15320 cli_runner.go:164] Run: docker container inspect addons-285600 --format={{.State.Running}}
	I0717 00:23:56.910393   15320 cli_runner.go:164] Run: docker container inspect addons-285600 --format={{.State.Status}}
	I0717 00:23:57.279292   15320 cli_runner.go:164] Run: docker exec addons-285600 stat /var/lib/dpkg/alternatives/iptables
	I0717 00:23:57.865963   15320 oci.go:144] the created container "addons-285600" has a running status.
	I0717 00:23:57.865963   15320 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-285600\id_rsa...
	I0717 00:23:58.152691   15320 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-285600\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 00:23:58.409408   15320 cli_runner.go:164] Run: docker container inspect addons-285600 --format={{.State.Status}}
	I0717 00:23:58.620605   15320 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 00:23:58.620605   15320 kic_runner.go:114] Args: [docker exec --privileged addons-285600 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 00:23:59.174827   15320 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-285600\id_rsa...
	I0717 00:24:03.188269   15320 cli_runner.go:164] Run: docker container inspect addons-285600 --format={{.State.Status}}
	I0717 00:24:03.485981   15320 machine.go:94] provisionDockerMachine start ...
	I0717 00:24:03.499984   15320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-285600
	I0717 00:24:03.788011   15320 main.go:141] libmachine: Using SSH client type: native
	I0717 00:24:03.800014   15320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x125a9e0] 0x125d5c0 <nil>  [] 0s} 127.0.0.1 62189 <nil> <nil>}
	I0717 00:24:03.800014   15320 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 00:24:03.999747   15320 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-285600
	
	I0717 00:24:03.999747   15320 ubuntu.go:169] provisioning hostname "addons-285600"
	I0717 00:24:04.014776   15320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-285600
	I0717 00:24:04.286677   15320 main.go:141] libmachine: Using SSH client type: native
	I0717 00:24:04.287670   15320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x125a9e0] 0x125d5c0 <nil>  [] 0s} 127.0.0.1 62189 <nil> <nil>}
	I0717 00:24:04.287670   15320 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-285600 && echo "addons-285600" | sudo tee /etc/hostname
	I0717 00:24:04.541224   15320 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-285600
	
	I0717 00:24:04.556239   15320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-285600
	I0717 00:24:04.878240   15320 main.go:141] libmachine: Using SSH client type: native
	I0717 00:24:04.879229   15320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x125a9e0] 0x125d5c0 <nil>  [] 0s} 127.0.0.1 62189 <nil> <nil>}
	I0717 00:24:04.879229   15320 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-285600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-285600/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-285600' | sudo tee -a /etc/hosts; 
				fi
			fi
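The heredoc above is minikube's hostname-registration script: it rewrites an existing `127.0.1.1` entry if one is present, and appends one otherwise. A standalone sketch of the same logic, run against a throwaway copy of `/etc/hosts` so it needs no root (the temp path is synthetic; `addons-285600` is the profile name from the log; GNU `sed -i` assumed):

```shell
# Sketch of the /etc/hosts hostname update performed above, on a temp file.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts"

if ! grep -q 'addons-285600' "$hosts"; then
  if grep -q '^127.0.1.1' "$hosts"; then
    # rewrite the existing 127.0.1.1 entry in place
    sed -i 's/^127.0.1.1 .*/127.0.1.1 addons-285600/' "$hosts"
  else
    echo '127.0.1.1 addons-285600' >> "$hosts"
  fi
fi

result=$(cat "$hosts")
rm -f "$hosts"
```

The outer `grep` makes the script idempotent: re-running it once the hostname is registered is a no-op.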
	I0717 00:24:05.121629   15320 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:24:05.121629   15320 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0717 00:24:05.121629   15320 ubuntu.go:177] setting up certificates
	I0717 00:24:05.121629   15320 provision.go:84] configureAuth start
	I0717 00:24:05.139633   15320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-285600
	I0717 00:24:05.457624   15320 provision.go:143] copyHostCerts
	I0717 00:24:05.457624   15320 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0717 00:24:05.460636   15320 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0717 00:24:05.462635   15320 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0717 00:24:05.463636   15320 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-285600 san=[127.0.0.1 192.168.49.2 addons-285600 localhost minikube]
	I0717 00:24:05.632218   15320 provision.go:177] copyRemoteCerts
	I0717 00:24:05.655701   15320 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 00:24:05.673332   15320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-285600
	I0717 00:24:05.980026   15320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62189 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-285600\id_rsa Username:docker}
	I0717 00:24:06.126980   15320 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 00:24:06.180175   15320 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0717 00:24:06.241410   15320 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 00:24:06.292093   15320 provision.go:87] duration metric: took 1.1704548s to configureAuth
	I0717 00:24:06.292659   15320 ubuntu.go:193] setting minikube options for container-runtime
	I0717 00:24:06.293293   15320 config.go:182] Loaded profile config "addons-285600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 00:24:06.306040   15320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-285600
	I0717 00:24:06.516065   15320 main.go:141] libmachine: Using SSH client type: native
	I0717 00:24:06.517835   15320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x125a9e0] 0x125d5c0 <nil>  [] 0s} 127.0.0.1 62189 <nil> <nil>}
	I0717 00:24:06.517835   15320 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 00:24:06.748576   15320 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0717 00:24:06.748576   15320 ubuntu.go:71] root file system type: overlay
	I0717 00:24:06.749587   15320 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 00:24:06.765580   15320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-285600
	I0717 00:24:07.101240   15320 main.go:141] libmachine: Using SSH client type: native
	I0717 00:24:07.102310   15320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x125a9e0] 0x125d5c0 <nil>  [] 0s} 127.0.0.1 62189 <nil> <nil>}
	I0717 00:24:07.102310   15320 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 00:24:07.388234   15320 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 00:24:07.405902   15320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-285600
	I0717 00:24:07.692371   15320 main.go:141] libmachine: Using SSH client type: native
	I0717 00:24:07.693382   15320 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x125a9e0] 0x125d5c0 <nil>  [] 0s} 127.0.0.1 62189 <nil> <nil>}
	I0717 00:24:07.693382   15320 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 00:24:11.973796   15320 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-06-29 00:00:53.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-07-17 00:24:07.368631596 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0717 00:24:11.973850   15320 machine.go:97] duration metric: took 8.4878023s to provisionDockerMachine
	I0717 00:24:11.973850   15320 client.go:171] duration metric: took 1m48.2113224s to LocalClient.Create
	I0717 00:24:11.973889   15320 start.go:167] duration metric: took 1m48.2116755s to libmachine.API.Create "addons-285600"
	I0717 00:24:11.974010   15320 start.go:293] postStartSetup for "addons-285600" (driver="docker")
	I0717 00:24:11.974099   15320 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 00:24:11.987680   15320 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 00:24:11.996602   15320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-285600
	I0717 00:24:12.200329   15320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62189 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-285600\id_rsa Username:docker}
	I0717 00:24:12.347972   15320 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 00:24:12.355783   15320 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 00:24:12.355783   15320 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 00:24:12.355783   15320 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 00:24:12.355783   15320 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0717 00:24:12.355783   15320 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0717 00:24:12.356779   15320 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0717 00:24:12.356779   15320 start.go:296] duration metric: took 382.7098ms for postStartSetup
	I0717 00:24:12.370778   15320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-285600
	I0717 00:24:12.543063   15320 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\config.json ...
	I0717 00:24:12.559059   15320 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:24:12.569062   15320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-285600
	I0717 00:24:12.745569   15320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62189 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-285600\id_rsa Username:docker}
	I0717 00:24:12.880968   15320 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 00:24:12.893899   15320 start.go:128] duration metric: took 1m49.1384251s to createHost
	I0717 00:24:12.893899   15320 start.go:83] releasing machines lock for "addons-285600", held for 1m49.1384251s
	I0717 00:24:12.903345   15320 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-285600
	I0717 00:24:13.091248   15320 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0717 00:24:13.101003   15320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-285600
	I0717 00:24:13.103006   15320 ssh_runner.go:195] Run: cat /version.json
	I0717 00:24:13.115040   15320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-285600
	I0717 00:24:13.291357   15320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62189 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-285600\id_rsa Username:docker}
	I0717 00:24:13.306358   15320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62189 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-285600\id_rsa Username:docker}
	I0717 00:24:13.452129   15320 ssh_runner.go:195] Run: systemctl --version
	W0717 00:24:13.452129   15320 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0717 00:24:13.474138   15320 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 00:24:13.502433   15320 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W0717 00:24:13.523645   15320 start.go:439] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I0717 00:24:13.538720   15320 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 00:24:13.613653   15320 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 00:24:13.614653   15320 start.go:495] detecting cgroup driver to use...
	I0717 00:24:13.614653   15320 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0717 00:24:13.614653   15320 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0717 00:24:13.624972   15320 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W0717 00:24:13.624972   15320 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0717 00:24:13.667951   15320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 00:24:13.707323   15320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 00:24:13.732550   15320 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 00:24:13.745423   15320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 00:24:13.785947   15320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 00:24:13.823844   15320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 00:24:13.862544   15320 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 00:24:13.895538   15320 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 00:24:13.930259   15320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 00:24:13.963642   15320 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 00:24:14.001693   15320 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
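The run of `sed` commands above rewrites `/etc/containerd/config.toml` for the cgroupfs driver. A sketch of the `SystemdCgroup` flip on a throwaway copy of the file (the sample TOML fragment is illustrative; the `sed -r` invocation matches the one in the log and preserves leading indentation via the capture group):

```shell
# Flip SystemdCgroup to false in a containerd config, as minikube does above.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# \1 keeps whatever indentation preceded the key.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"

result=$(cat "$cfg")
rm -f "$cfg"
```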
	I0717 00:24:14.039126   15320 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 00:24:14.074290   15320 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 00:24:14.106447   15320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:24:14.290881   15320 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 00:24:14.495142   15320 start.go:495] detecting cgroup driver to use...
	I0717 00:24:14.495142   15320 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0717 00:24:14.510585   15320 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 00:24:14.536614   15320 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0717 00:24:14.551632   15320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 00:24:14.579398   15320 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 00:24:14.629808   15320 ssh_runner.go:195] Run: which cri-dockerd
	I0717 00:24:14.654367   15320 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 00:24:14.679621   15320 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 00:24:14.732331   15320 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 00:24:14.928506   15320 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 00:24:15.107152   15320 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 00:24:15.107430   15320 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 00:24:15.159335   15320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:24:15.325138   15320 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 00:24:16.000012   15320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0717 00:24:16.034463   15320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 00:24:16.071851   15320 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 00:24:16.246261   15320 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 00:24:16.408433   15320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:24:16.567773   15320 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 00:24:16.607728   15320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 00:24:16.647495   15320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:24:16.808297   15320 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0717 00:24:16.945572   15320 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 00:24:16.959946   15320 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 00:24:16.972673   15320 start.go:563] Will wait 60s for crictl version
	I0717 00:24:16.987392   15320 ssh_runner.go:195] Run: which crictl
	I0717 00:24:17.014789   15320 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 00:24:17.082186   15320 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0717 00:24:17.094784   15320 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 00:24:17.161917   15320 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 00:24:17.222763   15320 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0717 00:24:17.233482   15320 cli_runner.go:164] Run: docker exec -t addons-285600 dig +short host.docker.internal
	I0717 00:24:17.513774   15320 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0717 00:24:17.527466   15320 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0717 00:24:17.541395   15320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 00:24:17.572983   15320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-285600
	I0717 00:24:17.747145   15320 kubeadm.go:883] updating cluster {Name:addons-285600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-285600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 00:24:17.747196   15320 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 00:24:17.758789   15320 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 00:24:17.806173   15320 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0717 00:24:17.806237   15320 docker.go:615] Images already preloaded, skipping extraction
	I0717 00:24:17.822698   15320 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 00:24:17.865943   15320 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0717 00:24:17.865943   15320 cache_images.go:84] Images are preloaded, skipping loading
	I0717 00:24:17.865943   15320 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.30.2 docker true true} ...
	I0717 00:24:17.866948   15320 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-285600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:addons-285600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
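The kubelet unit fragment above uses systemd's standard override idiom: the first, empty `ExecStart=` clears the unit's built-in command list so the second `ExecStart=` fully replaces it instead of appending a second command. A minimal sketch of writing such a drop-in (demo path under `/tmp` rather than the real `/etc/systemd/system`, which is an assumption for illustration):

```shell
# systemd override idiom: a blank ExecStart= resets the unit's command list,
# so the ExecStart= that follows replaces (rather than adds to) the original.
mkdir -p /tmp/demo/kubelet.service.d
cat > /tmp/demo/kubelet.service.d/10-kubeadm.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf
EOF
# the drop-in deliberately contains two ExecStart lines
grep -c '^ExecStart=' /tmp/demo/kubelet.service.d/10-kubeadm.conf
```

Against a real unit the override only takes effect after `systemctl daemon-reload`, which is exactly what the log runs a few lines further down.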
	I0717 00:24:17.876934   15320 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 00:24:17.968426   15320 cni.go:84] Creating CNI manager for ""
	I0717 00:24:17.968426   15320 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 00:24:17.968426   15320 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 00:24:17.968426   15320 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-285600 NodeName:addons-285600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 00:24:17.968426   15320 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-285600"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 00:24:17.983424   15320 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 00:24:18.001433   15320 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 00:24:18.010422   15320 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 00:24:18.034899   15320 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0717 00:24:18.069901   15320 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 00:24:18.106000   15320 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0717 00:24:18.158705   15320 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0717 00:24:18.171548   15320 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
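The one-liner above is minikube's idempotent hosts-file rewrite: strip any stale line for the control-plane name, then append the current mapping, so repeated runs never accumulate duplicates. A self-contained sketch of the same pattern against demo files (paths under `/tmp` are assumptions; the real command targets `/etc/hosts`):

```shell
# start from a hosts file that already has a stale mapping for the name
printf '127.0.0.1\tlocalhost\n192.168.49.1\tcontrol-plane.minikube.internal\n' > /tmp/hosts.demo
# remove any existing line ending in "<tab>control-plane.minikube.internal",
# then append the fresh mapping -- the same grep -v / echo pattern as the log
{ grep -v $'\tcontrol-plane.minikube.internal$' /tmp/hosts.demo; \
  echo $'192.168.49.2\tcontrol-plane.minikube.internal'; } > /tmp/hosts.updated
grep 'control-plane.minikube.internal' /tmp/hosts.updated
```

Only the fresh `192.168.49.2` entry survives; the stale `192.168.49.1` line is filtered out, which is why the rewrite is safe to run on every cluster start.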
	I0717 00:24:18.207568   15320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:24:18.378927   15320 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:24:18.411670   15320 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600 for IP: 192.168.49.2
	I0717 00:24:18.411670   15320 certs.go:194] generating shared ca certs ...
	I0717 00:24:18.411670   15320 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:18.412212   15320 certs.go:240] generating "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0717 00:24:18.698116   15320 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt ...
	I0717 00:24:18.699116   15320 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt: {Name:mk1d1f25727e6fcaf35d7d74de783ad2d2c6be81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:18.699922   15320 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key ...
	I0717 00:24:18.699922   15320 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key: {Name:mkffeaed7182692572a4aaea1f77b60f45c78854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:18.701186   15320 certs.go:240] generating "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0717 00:24:18.787399   15320 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0717 00:24:18.787399   15320 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mkc09bedb222360a1dcc92648b423932b0197d96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:18.788398   15320 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key ...
	I0717 00:24:18.788398   15320 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key: {Name:mk23d29d7cc073007c63c291d9cf6fa322998d26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:18.789946   15320 certs.go:256] generating profile certs ...
	I0717 00:24:18.790502   15320 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.key
	I0717 00:24:18.791026   15320 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt with IP's: []
	I0717 00:24:19.018909   15320 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt ...
	I0717 00:24:19.019923   15320 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: {Name:mk2f3c4a66e089c5d252fd68b02823820a405ca0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:19.020914   15320 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.key ...
	I0717 00:24:19.020914   15320 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.key: {Name:mka773367748ba9204183a8a13d2e319b541b592 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:19.021930   15320 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\apiserver.key.43f6ec6d
	I0717 00:24:19.022440   15320 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\apiserver.crt.43f6ec6d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0717 00:24:19.439591   15320 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\apiserver.crt.43f6ec6d ...
	I0717 00:24:19.439591   15320 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\apiserver.crt.43f6ec6d: {Name:mkd9d93ff0ba53290ee0f32b99d8ec626e0b56ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:19.440589   15320 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\apiserver.key.43f6ec6d ...
	I0717 00:24:19.440589   15320 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\apiserver.key.43f6ec6d: {Name:mk2b79286e85f8e9fc85ac8378485480274ef3ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:19.441728   15320 certs.go:381] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\apiserver.crt.43f6ec6d -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\apiserver.crt
	I0717 00:24:19.452724   15320 certs.go:385] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\apiserver.key.43f6ec6d -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\apiserver.key
	I0717 00:24:19.453723   15320 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\proxy-client.key
	I0717 00:24:19.454301   15320 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\proxy-client.crt with IP's: []
	I0717 00:24:19.592014   15320 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\proxy-client.crt ...
	I0717 00:24:19.592014   15320 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\proxy-client.crt: {Name:mk6934e662932696e5caa945aedb3fd2a7e35fc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:19.592799   15320 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\proxy-client.key ...
	I0717 00:24:19.592799   15320 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\proxy-client.key: {Name:mka425256f5e36ef6988a435bde230dded6f10a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:19.604799   15320 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0717 00:24:19.605843   15320 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0717 00:24:19.606075   15320 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0717 00:24:19.606401   15320 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0717 00:24:19.607793   15320 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 00:24:19.653095   15320 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 00:24:19.695263   15320 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 00:24:19.738775   15320 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 00:24:19.786832   15320 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0717 00:24:19.835516   15320 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 00:24:19.880771   15320 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 00:24:19.923557   15320 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 00:24:19.969874   15320 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 00:24:20.012023   15320 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 00:24:20.060020   15320 ssh_runner.go:195] Run: openssl version
	I0717 00:24:20.085876   15320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 00:24:20.121136   15320 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:24:20.129136   15320 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:24:20.142134   15320 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:24:20.168142   15320 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
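The `b5213941.0` symlink name above is not arbitrary: it is OpenSSL's subject-name hash of the CA certificate, the naming scheme that lets TLS libraries look up a trust anchor by hashed directory entry. A sketch reproducing the derivation with a throwaway self-signed cert (all `/tmp` paths are demo assumptions):

```shell
# generate a throwaway CA cert just to demonstrate the hash-named symlink scheme
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=minikubeCA" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.pem -days 1 2>/dev/null
# -hash prints the subject-name hash used as the /etc/ssl/certs/<hash>.0 filename
openssl x509 -hash -noout -in /tmp/demo-ca.pem > /tmp/demo-ca.hash
ln -fs /tmp/demo-ca.pem "/tmp/$(cat /tmp/demo-ca.hash).0"
readlink "/tmp/$(cat /tmp/demo-ca.hash).0"
```

Because the hash depends only on the subject name, every cert with subject `CN=minikubeCA` maps to the same `<hash>.0` entry, which is what makes the `test -L || ln -fs` guard in the log idempotent.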
	I0717 00:24:20.204894   15320 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 00:24:20.214940   15320 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 00:24:20.215803   15320 kubeadm.go:392] StartCluster: {Name:addons-285600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:addons-285600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:24:20.225299   15320 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 00:24:20.275730   15320 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 00:24:20.309838   15320 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 00:24:20.329643   15320 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0717 00:24:20.341197   15320 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 00:24:20.357901   15320 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 00:24:20.357901   15320 kubeadm.go:157] found existing configuration files:
	
	I0717 00:24:20.368904   15320 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 00:24:20.388696   15320 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 00:24:20.403474   15320 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 00:24:20.435625   15320 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 00:24:20.456326   15320 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 00:24:20.471398   15320 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 00:24:20.505936   15320 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 00:24:20.526909   15320 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 00:24:20.538889   15320 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 00:24:20.571379   15320 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 00:24:20.590922   15320 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 00:24:20.602931   15320 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 00:24:20.623978   15320 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 00:24:20.764534   15320 kubeadm.go:310] 	[WARNING Swap]: swap is supported for cgroup v2 only; the NodeSwap feature gate of the kubelet is beta but disabled by default
	I0717 00:24:20.943044   15320 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 00:24:35.404388   15320 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
	I0717 00:24:35.405401   15320 kubeadm.go:310] [preflight] Running pre-flight checks
	I0717 00:24:35.405401   15320 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 00:24:35.405401   15320 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 00:24:35.406293   15320 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 00:24:35.406616   15320 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 00:24:35.411573   15320 out.go:204]   - Generating certificates and keys ...
	I0717 00:24:35.411904   15320 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0717 00:24:35.412130   15320 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0717 00:24:35.412307   15320 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 00:24:35.412496   15320 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0717 00:24:35.412723   15320 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0717 00:24:35.412944   15320 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0717 00:24:35.413112   15320 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0717 00:24:35.413678   15320 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-285600 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 00:24:35.413798   15320 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0717 00:24:35.414219   15320 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-285600 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 00:24:35.414401   15320 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 00:24:35.414401   15320 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 00:24:35.414401   15320 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0717 00:24:35.414946   15320 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 00:24:35.415089   15320 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 00:24:35.415327   15320 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0717 00:24:35.415327   15320 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 00:24:35.415327   15320 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 00:24:35.415327   15320 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 00:24:35.415902   15320 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 00:24:35.416093   15320 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 00:24:35.418662   15320 out.go:204]   - Booting up control plane ...
	I0717 00:24:35.419098   15320 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 00:24:35.419453   15320 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 00:24:35.419708   15320 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 00:24:35.420033   15320 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 00:24:35.420237   15320 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 00:24:35.420237   15320 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0717 00:24:35.420237   15320 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0717 00:24:35.420859   15320 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0717 00:24:35.420859   15320 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.003241133s
	I0717 00:24:35.420859   15320 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0717 00:24:35.421405   15320 kubeadm.go:310] [api-check] The API server is healthy after 8.002266926s
	I0717 00:24:35.421680   15320 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 00:24:35.421680   15320 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 00:24:35.422291   15320 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 00:24:35.422291   15320 kubeadm.go:310] [mark-control-plane] Marking the node addons-285600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 00:24:35.422291   15320 kubeadm.go:310] [bootstrap-token] Using token: 9xwjni.kc6v7hrliukrcnkx
	I0717 00:24:35.426253   15320 out.go:204]   - Configuring RBAC rules ...
	I0717 00:24:35.427304   15320 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 00:24:35.427304   15320 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 00:24:35.427304   15320 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 00:24:35.427304   15320 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 00:24:35.428312   15320 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 00:24:35.428312   15320 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 00:24:35.428312   15320 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 00:24:35.428312   15320 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0717 00:24:35.428312   15320 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0717 00:24:35.428312   15320 kubeadm.go:310] 
	I0717 00:24:35.429309   15320 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0717 00:24:35.429309   15320 kubeadm.go:310] 
	I0717 00:24:35.429309   15320 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0717 00:24:35.429309   15320 kubeadm.go:310] 
	I0717 00:24:35.429309   15320 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0717 00:24:35.429309   15320 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 00:24:35.430033   15320 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 00:24:35.430100   15320 kubeadm.go:310] 
	I0717 00:24:35.430160   15320 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0717 00:24:35.430210   15320 kubeadm.go:310] 
	I0717 00:24:35.430237   15320 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 00:24:35.430237   15320 kubeadm.go:310] 
	I0717 00:24:35.430237   15320 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0717 00:24:35.430237   15320 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 00:24:35.430863   15320 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 00:24:35.430863   15320 kubeadm.go:310] 
	I0717 00:24:35.430902   15320 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 00:24:35.431157   15320 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0717 00:24:35.431243   15320 kubeadm.go:310] 
	I0717 00:24:35.431340   15320 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9xwjni.kc6v7hrliukrcnkx \
	I0717 00:24:35.431621   15320 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c8953ca8d23690b8b9195405230df0a890e6c99f09f69cee7b16d3b955d7d4a5 \
	I0717 00:24:35.431790   15320 kubeadm.go:310] 	--control-plane 
	I0717 00:24:35.431895   15320 kubeadm.go:310] 
	I0717 00:24:35.432096   15320 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0717 00:24:35.432096   15320 kubeadm.go:310] 
	I0717 00:24:35.432310   15320 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9xwjni.kc6v7hrliukrcnkx \
	I0717 00:24:35.432496   15320 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c8953ca8d23690b8b9195405230df0a890e6c99f09f69cee7b16d3b955d7d4a5 
	I0717 00:24:35.432496   15320 cni.go:84] Creating CNI manager for ""
	I0717 00:24:35.432496   15320 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 00:24:35.435824   15320 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0717 00:24:35.450499   15320 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0717 00:24:35.472877   15320 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0717 00:24:35.509670   15320 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 00:24:35.526724   15320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:24:35.527899   15320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-285600 minikube.k8s.io/updated_at=2024_07_17T00_24_35_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185 minikube.k8s.io/name=addons-285600 minikube.k8s.io/primary=true
	I0717 00:24:35.529048   15320 ops.go:34] apiserver oom_adj: -16
	I0717 00:24:35.663024   15320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:24:36.166868   15320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:24:36.669179   15320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:24:37.175244   15320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:24:37.667277   15320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:24:38.170472   15320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:24:38.675096   15320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:24:39.178699   15320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:24:39.666485   15320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:24:40.171365   15320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:24:40.666783   15320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:24:41.170508   15320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:24:41.676332   15320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:24:42.163160   15320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:24:42.667650   15320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:24:43.169262   15320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:24:43.669740   15320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:24:44.178134   15320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:24:44.665099   15320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:24:45.168919   15320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:24:45.675788   15320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:24:46.168360   15320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:24:46.674068   15320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:24:47.178703   15320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:24:47.678882   15320 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 00:24:47.807656   15320 kubeadm.go:1113] duration metric: took 12.297821s to wait for elevateKubeSystemPrivileges
	I0717 00:24:47.807992   15320 kubeadm.go:394] duration metric: took 27.5919777s to StartCluster
	I0717 00:24:47.808038   15320 settings.go:142] acquiring lock: {Name:mke99fb8c09012609ce6804e7dfd4d68f5541df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:47.808405   15320 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0717 00:24:47.809462   15320 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:24:47.811189   15320 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 00:24:47.811189   15320 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 00:24:47.811189   15320 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0717 00:24:47.811731   15320 addons.go:69] Setting default-storageclass=true in profile "addons-285600"
	I0717 00:24:47.811731   15320 addons.go:69] Setting gcp-auth=true in profile "addons-285600"
	I0717 00:24:47.811805   15320 addons.go:69] Setting ingress-dns=true in profile "addons-285600"
	I0717 00:24:47.811805   15320 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-285600"
	I0717 00:24:47.811919   15320 addons.go:69] Setting registry=true in profile "addons-285600"
	I0717 00:24:47.811950   15320 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-285600"
	I0717 00:24:47.812057   15320 addons.go:69] Setting helm-tiller=true in profile "addons-285600"
	I0717 00:24:47.812102   15320 addons.go:234] Setting addon registry=true in "addons-285600"
	I0717 00:24:47.812102   15320 addons.go:234] Setting addon helm-tiller=true in "addons-285600"
	I0717 00:24:47.812102   15320 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-285600"
	I0717 00:24:47.812194   15320 addons.go:69] Setting volumesnapshots=true in profile "addons-285600"
	I0717 00:24:47.812194   15320 addons.go:69] Setting inspektor-gadget=true in profile "addons-285600"
	I0717 00:24:47.812194   15320 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-285600"
	I0717 00:24:47.812194   15320 config.go:182] Loaded profile config "addons-285600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 00:24:47.812194   15320 addons.go:69] Setting storage-provisioner=true in profile "addons-285600"
	I0717 00:24:47.812281   15320 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-285600"
	I0717 00:24:47.812281   15320 host.go:66] Checking if "addons-285600" exists ...
	I0717 00:24:47.812194   15320 addons.go:69] Setting ingress=true in profile "addons-285600"
	I0717 00:24:47.812352   15320 addons.go:234] Setting addon ingress=true in "addons-285600"
	I0717 00:24:47.811731   15320 addons.go:69] Setting cloud-spanner=true in profile "addons-285600"
	I0717 00:24:47.812529   15320 host.go:66] Checking if "addons-285600" exists ...
	I0717 00:24:47.812529   15320 addons.go:234] Setting addon cloud-spanner=true in "addons-285600"
	I0717 00:24:47.811869   15320 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-285600"
	I0717 00:24:47.812529   15320 host.go:66] Checking if "addons-285600" exists ...
	I0717 00:24:47.812529   15320 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-285600"
	I0717 00:24:47.812744   15320 host.go:66] Checking if "addons-285600" exists ...
	I0717 00:24:47.812281   15320 addons.go:234] Setting addon storage-provisioner=true in "addons-285600"
	I0717 00:24:47.812815   15320 host.go:66] Checking if "addons-285600" exists ...
	I0717 00:24:47.812194   15320 addons.go:234] Setting addon inspektor-gadget=true in "addons-285600"
	I0717 00:24:47.812815   15320 host.go:66] Checking if "addons-285600" exists ...
	I0717 00:24:47.812815   15320 host.go:66] Checking if "addons-285600" exists ...
	I0717 00:24:47.812194   15320 addons.go:234] Setting addon volumesnapshots=true in "addons-285600"
	I0717 00:24:47.812102   15320 mustload.go:65] Loading cluster: addons-285600
	I0717 00:24:47.812102   15320 addons.go:69] Setting volcano=true in profile "addons-285600"
	I0717 00:24:47.813877   15320 addons.go:234] Setting addon volcano=true in "addons-285600"
	I0717 00:24:47.812102   15320 addons.go:234] Setting addon ingress-dns=true in "addons-285600"
	I0717 00:24:47.813877   15320 config.go:182] Loaded profile config "addons-285600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 00:24:47.813877   15320 out.go:177] * Verifying Kubernetes components...
	I0717 00:24:47.813877   15320 host.go:66] Checking if "addons-285600" exists ...
	I0717 00:24:47.812194   15320 addons.go:69] Setting metrics-server=true in profile "addons-285600"
	I0717 00:24:47.813877   15320 addons.go:234] Setting addon metrics-server=true in "addons-285600"
	I0717 00:24:47.812281   15320 host.go:66] Checking if "addons-285600" exists ...
	I0717 00:24:47.813877   15320 host.go:66] Checking if "addons-285600" exists ...
	I0717 00:24:47.813366   15320 host.go:66] Checking if "addons-285600" exists ...
	I0717 00:24:47.811731   15320 addons.go:69] Setting yakd=true in profile "addons-285600"
	I0717 00:24:47.813877   15320 addons.go:234] Setting addon yakd=true in "addons-285600"
	I0717 00:24:47.815194   15320 host.go:66] Checking if "addons-285600" exists ...
	I0717 00:24:47.813877   15320 host.go:66] Checking if "addons-285600" exists ...
	I0717 00:24:47.857440   15320 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:24:47.860460   15320 cli_runner.go:164] Run: docker container inspect addons-285600 --format={{.State.Status}}
	I0717 00:24:47.868440   15320 cli_runner.go:164] Run: docker container inspect addons-285600 --format={{.State.Status}}
	I0717 00:24:47.872439   15320 cli_runner.go:164] Run: docker container inspect addons-285600 --format={{.State.Status}}
	I0717 00:24:47.874440   15320 cli_runner.go:164] Run: docker container inspect addons-285600 --format={{.State.Status}}
	I0717 00:24:47.874440   15320 cli_runner.go:164] Run: docker container inspect addons-285600 --format={{.State.Status}}
	I0717 00:24:47.875443   15320 cli_runner.go:164] Run: docker container inspect addons-285600 --format={{.State.Status}}
	I0717 00:24:47.876911   15320 cli_runner.go:164] Run: docker container inspect addons-285600 --format={{.State.Status}}
	I0717 00:24:47.878201   15320 cli_runner.go:164] Run: docker container inspect addons-285600 --format={{.State.Status}}
	I0717 00:24:47.879440   15320 cli_runner.go:164] Run: docker container inspect addons-285600 --format={{.State.Status}}
	I0717 00:24:47.881659   15320 cli_runner.go:164] Run: docker container inspect addons-285600 --format={{.State.Status}}
	I0717 00:24:47.883442   15320 cli_runner.go:164] Run: docker container inspect addons-285600 --format={{.State.Status}}
	I0717 00:24:47.899318   15320 cli_runner.go:164] Run: docker container inspect addons-285600 --format={{.State.Status}}
	I0717 00:24:47.905927   15320 cli_runner.go:164] Run: docker container inspect addons-285600 --format={{.State.Status}}
	I0717 00:24:47.916114   15320 cli_runner.go:164] Run: docker container inspect addons-285600 --format={{.State.Status}}
	I0717 00:24:47.919920   15320 cli_runner.go:164] Run: docker container inspect addons-285600 --format={{.State.Status}}
	I0717 00:24:47.925916   15320 cli_runner.go:164] Run: docker container inspect addons-285600 --format={{.State.Status}}
	I0717 00:24:48.252146   15320 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0717 00:24:48.257140   15320 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0717 00:24:48.257140   15320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0717 00:24:48.259192   15320 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0717 00:24:48.266128   15320 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0717 00:24:48.266128   15320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0717 00:24:48.269144   15320 host.go:66] Checking if "addons-285600" exists ...
	I0717 00:24:48.281128   15320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-285600
	I0717 00:24:48.284154   15320 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-285600"
	I0717 00:24:48.284154   15320 host.go:66] Checking if "addons-285600" exists ...
	I0717 00:24:48.292133   15320 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 00:24:48.294129   15320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-285600
	I0717 00:24:48.295153   15320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-285600
	I0717 00:24:48.295153   15320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}'" addons-285600
	I0717 00:24:48.298164   15320 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 00:24:48.298164   15320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 00:24:48.303155   15320 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0717 00:24:48.309126   15320 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0717 00:24:48.318153   15320 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0717 00:24:48.320132   15320 addons.go:234] Setting addon default-storageclass=true in "addons-285600"
	I0717 00:24:48.320132   15320 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0717 00:24:48.322140   15320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-285600
	I0717 00:24:48.323136   15320 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0717 00:24:48.325134   15320 cli_runner.go:164] Run: docker container inspect addons-285600 --format={{.State.Status}}
	I0717 00:24:48.325134   15320 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0717 00:24:48.325134   15320 host.go:66] Checking if "addons-285600" exists ...
	I0717 00:24:48.333174   15320 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0717 00:24:48.337153   15320 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0717 00:24:48.343152   15320 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0717 00:24:48.345139   15320 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0717 00:24:48.356164   15320 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0717 00:24:48.347132   15320 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0717 00:24:48.349139   15320 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0717 00:24:48.352139   15320 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0717 00:24:48.354132   15320 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 00:24:48.356164   15320 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0717 00:24:48.361149   15320 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0717 00:24:48.361149   15320 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 00:24:48.363143   15320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0717 00:24:48.367135   15320 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0717 00:24:48.370133   15320 cli_runner.go:164] Run: docker container inspect addons-285600 --format={{.State.Status}}
	I0717 00:24:48.373149   15320 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0717 00:24:48.374148   15320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-285600
	I0717 00:24:48.375146   15320 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0717 00:24:48.377140   15320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-285600
	I0717 00:24:48.379175   15320 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0717 00:24:48.380135   15320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-285600
	I0717 00:24:48.385124   15320 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 00:24:48.386147   15320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-285600
	I0717 00:24:48.389155   15320 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0717 00:24:48.392145   15320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0717 00:24:48.393525   15320 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0717 00:24:48.400772   15320 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 00:24:48.409832   15320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-285600
	I0717 00:24:48.419120   15320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-285600
	I0717 00:24:48.437115   15320 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0717 00:24:48.442122   15320 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0717 00:24:48.467124   15320 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0717 00:24:48.470130   15320 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 00:24:48.476124   15320 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0717 00:24:48.481115   15320 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 00:24:48.481115   15320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0717 00:24:48.493134   15320 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0717 00:24:48.503247   15320 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0717 00:24:48.503247   15320 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0717 00:24:48.506804   15320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-285600
	I0717 00:24:48.516240   15320 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0717 00:24:48.516240   15320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0717 00:24:48.531249   15320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-285600
	I0717 00:24:48.542250   15320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-285600
	I0717 00:24:48.594005   15320 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 00:24:48.615007   15320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62189 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-285600\id_rsa Username:docker}
	I0717 00:24:48.645026   15320 out.go:177] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                      │
	│    Registry addon with docker driver uses port 62187 please use that instead of default port 5000    │
	│                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 00:24:48.653655   15320 out.go:177] * For more information see: https://minikube.sigs.k8s.io/docs/drivers/docker
	I0717 00:24:48.660402   15320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62189 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-285600\id_rsa Username:docker}
	I0717 00:24:48.667700   15320 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0717 00:24:48.675676   15320 out.go:177]   - Using image docker.io/registry:2.8.3
	I0717 00:24:48.693681   15320 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0717 00:24:48.693681   15320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0717 00:24:48.712683   15320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-285600
	I0717 00:24:48.721670   15320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62189 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-285600\id_rsa Username:docker}
	I0717 00:24:48.748680   15320 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0717 00:24:48.756710   15320 out.go:177]   - Using image docker.io/busybox:stable
	I0717 00:24:48.761689   15320 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0717 00:24:48.761689   15320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0717 00:24:48.790398   15320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-285600
	I0717 00:24:48.802608   15320 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 00:24:48.802608   15320 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 00:24:48.813606   15320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62189 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-285600\id_rsa Username:docker}
	I0717 00:24:48.823600   15320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-285600
	I0717 00:24:48.828594   15320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62189 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-285600\id_rsa Username:docker}
	I0717 00:24:48.837615   15320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62189 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-285600\id_rsa Username:docker}
	I0717 00:24:48.849607   15320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62189 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-285600\id_rsa Username:docker}
	I0717 00:24:48.863606   15320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62189 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-285600\id_rsa Username:docker}
	I0717 00:24:48.877601   15320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62189 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-285600\id_rsa Username:docker}
	I0717 00:24:48.905229   15320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62189 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-285600\id_rsa Username:docker}
	I0717 00:24:48.955249   15320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62189 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-285600\id_rsa Username:docker}
	I0717 00:24:48.972242   15320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62189 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-285600\id_rsa Username:docker}
	I0717 00:24:48.993255   15320 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.1358061s)
	W0717 00:24:48.994232   15320 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0717 00:24:48.994232   15320 retry.go:31] will retry after 241.467527ms: ssh: handshake failed: EOF
	I0717 00:24:49.015254   15320 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:24:49.037227   15320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62189 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-285600\id_rsa Username:docker}
	I0717 00:24:49.052233   15320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62189 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-285600\id_rsa Username:docker}
	I0717 00:24:49.095974   15320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62189 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-285600\id_rsa Username:docker}
	W0717 00:24:49.195469   15320 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0717 00:24:49.195748   15320 retry.go:31] will retry after 279.701581ms: ssh: handshake failed: EOF
	I0717 00:24:49.912547   15320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0717 00:24:50.096604   15320 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0717 00:24:50.096604   15320 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0717 00:24:50.097706   15320 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 00:24:50.097706   15320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0717 00:24:50.209888   15320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 00:24:50.294510   15320 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0717 00:24:50.294601   15320 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0717 00:24:50.294701   15320 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0717 00:24:50.294701   15320 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0717 00:24:50.295741   15320 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0717 00:24:50.295741   15320 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0717 00:24:50.315112   15320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0717 00:24:50.315778   15320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 00:24:50.317060   15320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0717 00:24:50.492800   15320 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0717 00:24:50.492800   15320 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0717 00:24:50.492800   15320 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0717 00:24:50.492800   15320 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0717 00:24:50.512052   15320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 00:24:50.513871   15320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0717 00:24:50.890656   15320 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0717 00:24:50.890656   15320 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0717 00:24:50.894098   15320 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 00:24:50.894098   15320 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 00:24:50.894263   15320 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0717 00:24:50.894263   15320 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0717 00:24:50.894355   15320 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0717 00:24:50.894355   15320 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0717 00:24:50.894355   15320 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0717 00:24:50.894589   15320 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0717 00:24:50.911005   15320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 00:24:50.992026   15320 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0717 00:24:50.992026   15320 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0717 00:24:51.192787   15320 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0717 00:24:51.192787   15320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0717 00:24:51.392705   15320 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0717 00:24:51.392705   15320 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0717 00:24:51.493257   15320 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0717 00:24:51.493257   15320 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 00:24:51.493257   15320 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0717 00:24:51.493257   15320 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 00:24:51.611975   15320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0717 00:24:51.694559   15320 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0717 00:24:51.694733   15320 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0717 00:24:51.694791   15320 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0717 00:24:51.694934   15320 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0717 00:24:51.893278   15320 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0717 00:24:51.893447   15320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0717 00:24:52.011179   15320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0717 00:24:52.091140   15320 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0717 00:24:52.091140   15320 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0717 00:24:52.111365   15320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 00:24:52.293841   15320 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0717 00:24:52.293974   15320 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0717 00:24:52.393855   15320 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.3785742s)
	I0717 00:24:52.393855   15320 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.7998199s)
	I0717 00:24:52.393855   15320 start.go:971] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I0717 00:24:52.408796   15320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-285600
	I0717 00:24:52.494120   15320 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0717 00:24:52.494219   15320 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0717 00:24:52.510723   15320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0717 00:24:52.595743   15320 node_ready.go:35] waiting up to 6m0s for node "addons-285600" to be "Ready" ...
	I0717 00:24:52.692464   15320 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0717 00:24:52.692464   15320 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0717 00:24:52.894862   15320 node_ready.go:49] node "addons-285600" has status "Ready":"True"
	I0717 00:24:52.894981   15320 node_ready.go:38] duration metric: took 299.1786ms for node "addons-285600" to be "Ready" ...
	I0717 00:24:52.894981   15320 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 00:24:52.993060   15320 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0717 00:24:52.993060   15320 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0717 00:24:53.092950   15320 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 00:24:53.093109   15320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0717 00:24:53.191786   15320 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0717 00:24:53.191786   15320 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0717 00:24:53.393875   15320 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-285600" context rescaled to 1 replicas
	I0717 00:24:53.489662   15320 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0717 00:24:53.489662   15320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0717 00:24:53.501077   15320 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-nf699" in "kube-system" namespace to be "Ready" ...
	I0717 00:24:53.705818   15320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 00:24:53.792199   15320 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 00:24:53.792340   15320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0717 00:24:54.290448   15320 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0717 00:24:54.290670   15320 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0717 00:24:54.409580   15320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 00:24:54.987810   15320 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0717 00:24:54.987810   15320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0717 00:24:55.691254   15320 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0717 00:24:55.691254   15320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0717 00:24:56.296629   15320 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 00:24:56.296629   15320 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0717 00:24:56.498920   15320 pod_ready.go:102] pod "coredns-7db6d8ff4d-nf699" in "kube-system" namespace has status "Ready":"False"
	I0717 00:24:56.806659   15320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 00:24:57.992722   15320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.0800194s)
	I0717 00:24:58.394922   15320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.1848878s)
	I0717 00:24:58.793713   15320 pod_ready.go:102] pod "coredns-7db6d8ff4d-nf699" in "kube-system" namespace has status "Ready":"False"
	I0717 00:25:01.194151   15320 pod_ready.go:102] pod "coredns-7db6d8ff4d-nf699" in "kube-system" namespace has status "Ready":"False"
	I0717 00:25:03.488209   15320 pod_ready.go:102] pod "coredns-7db6d8ff4d-nf699" in "kube-system" namespace has status "Ready":"False"
	I0717 00:25:04.300673   15320 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0717 00:25:04.310458   15320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-285600
	I0717 00:25:04.495132   15320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62189 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-285600\id_rsa Username:docker}
	I0717 00:25:05.689750   15320 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0717 00:25:06.095451   15320 addons.go:234] Setting addon gcp-auth=true in "addons-285600"
	I0717 00:25:06.095926   15320 host.go:66] Checking if "addons-285600" exists ...
	I0717 00:25:06.119672   15320 cli_runner.go:164] Run: docker container inspect addons-285600 --format={{.State.Status}}
	I0717 00:25:06.189827   15320 pod_ready.go:102] pod "coredns-7db6d8ff4d-nf699" in "kube-system" namespace has status "Ready":"False"
	I0717 00:25:06.315471   15320 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0717 00:25:06.324523   15320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-285600
	I0717 00:25:06.511169   15320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62189 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\addons-285600\id_rsa Username:docker}
	I0717 00:25:08.394759   15320 pod_ready.go:102] pod "coredns-7db6d8ff4d-nf699" in "kube-system" namespace has status "Ready":"False"
	I0717 00:25:10.588958   15320 pod_ready.go:102] pod "coredns-7db6d8ff4d-nf699" in "kube-system" namespace has status "Ready":"False"
	I0717 00:25:12.686720   15320 pod_ready.go:92] pod "coredns-7db6d8ff4d-nf699" in "kube-system" namespace has status "Ready":"True"
	I0717 00:25:12.686720   15320 pod_ready.go:81] duration metric: took 19.1848881s for pod "coredns-7db6d8ff4d-nf699" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:12.686720   15320 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-sm99c" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:13.302440   15320 pod_ready.go:92] pod "coredns-7db6d8ff4d-sm99c" in "kube-system" namespace has status "Ready":"True"
	I0717 00:25:13.302440   15320 pod_ready.go:81] duration metric: took 615.7155ms for pod "coredns-7db6d8ff4d-sm99c" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:13.302440   15320 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-285600" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:13.889343   15320 pod_ready.go:92] pod "etcd-addons-285600" in "kube-system" namespace has status "Ready":"True"
	I0717 00:25:13.889343   15320 pod_ready.go:81] duration metric: took 586.8985ms for pod "etcd-addons-285600" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:13.889343   15320 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-285600" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:14.287767   15320 pod_ready.go:92] pod "kube-apiserver-addons-285600" in "kube-system" namespace has status "Ready":"True"
	I0717 00:25:14.287767   15320 pod_ready.go:81] duration metric: took 398.4204ms for pod "kube-apiserver-addons-285600" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:14.287767   15320 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-285600" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:14.500452   15320 pod_ready.go:92] pod "kube-controller-manager-addons-285600" in "kube-system" namespace has status "Ready":"True"
	I0717 00:25:14.500452   15320 pod_ready.go:81] duration metric: took 212.6834ms for pod "kube-controller-manager-addons-285600" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:14.500452   15320 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-52kxk" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:14.786301   15320 pod_ready.go:92] pod "kube-proxy-52kxk" in "kube-system" namespace has status "Ready":"True"
	I0717 00:25:14.787325   15320 pod_ready.go:81] duration metric: took 285.8472ms for pod "kube-proxy-52kxk" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:14.787325   15320 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-285600" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:15.086677   15320 pod_ready.go:92] pod "kube-scheduler-addons-285600" in "kube-system" namespace has status "Ready":"True"
	I0717 00:25:15.086790   15320 pod_ready.go:81] duration metric: took 299.3489ms for pod "kube-scheduler-addons-285600" in "kube-system" namespace to be "Ready" ...
	I0717 00:25:15.086821   15320 pod_ready.go:38] duration metric: took 22.1915842s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 00:25:15.086821   15320 api_server.go:52] waiting for apiserver process to appear ...
	I0717 00:25:15.111918   15320 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:25:19.285576   15320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (28.9680569s)
	I0717 00:25:19.285628   15320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (28.9702864s)
	I0717 00:25:19.285628   15320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (28.9696209s)
	I0717 00:25:19.286010   15320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (28.7737309s)
	I0717 00:25:19.286010   15320 addons.go:475] Verifying addon ingress=true in "addons-285600"
	I0717 00:25:19.286010   15320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (28.7719117s)
	I0717 00:25:19.287071   15320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (27.1754124s)
	I0717 00:25:19.287071   15320 addons.go:475] Verifying addon metrics-server=true in "addons-285600"
	I0717 00:25:19.287331   15320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (26.7763968s)
	I0717 00:25:19.286010   15320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (28.3747814s)
	I0717 00:25:19.286010   15320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (27.6738163s)
	I0717 00:25:19.286010   15320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (27.274616s)
	I0717 00:25:19.287753   15320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (25.5817327s)
	I0717 00:25:19.287753   15320 addons.go:475] Verifying addon registry=true in "addons-285600"
	W0717 00:25:19.287753   15320 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 00:25:19.287753   15320 retry.go:31] will retry after 346.855706ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 00:25:19.288072   15320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (24.878296s)
	I0717 00:25:19.294026   15320 out.go:177] * Verifying registry addon...
	I0717 00:25:19.297079   15320 out.go:177] * Verifying ingress addon...
	I0717 00:25:19.301067   15320 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-285600 service yakd-dashboard -n yakd-dashboard
	
	I0717 00:25:19.308765   15320 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0717 00:25:19.314385   15320 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0717 00:25:19.392942   15320 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0717 00:25:19.392942   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:19.393233   15320 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0717 00:25:19.393289   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0717 00:25:19.587920   15320 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0717 00:25:19.659947   15320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 00:25:19.992330   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:19.992900   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:20.408323   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:20.408323   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:20.990779   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:21.100563   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:21.587891   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:21.595720   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:21.891041   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:21.892835   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:21.896130   15320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (25.0892723s)
	I0717 00:25:21.896130   15320 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-285600"
	I0717 00:25:21.896130   15320 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (15.5800029s)
	I0717 00:25:21.896130   15320 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (6.7841587s)
	I0717 00:25:21.896130   15320 api_server.go:72] duration metric: took 34.084672s to wait for apiserver process to appear ...
	I0717 00:25:21.896130   15320 api_server.go:88] waiting for apiserver healthz status ...
	I0717 00:25:21.896780   15320 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62188/healthz ...
	I0717 00:25:21.903115   15320 out.go:177] * Verifying csi-hostpath-driver addon...
	I0717 00:25:21.907691   15320 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0717 00:25:21.915220   15320 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0717 00:25:21.921438   15320 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0717 00:25:21.923460   15320 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0717 00:25:21.923460   15320 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0717 00:25:21.997284   15320 api_server.go:279] https://127.0.0.1:62188/healthz returned 200:
	ok
	I0717 00:25:21.997284   15320 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0717 00:25:21.998565   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:22.004700   15320 api_server.go:141] control plane version: v1.30.2
	I0717 00:25:22.004785   15320 api_server.go:131] duration metric: took 108.004ms to wait for apiserver health ...
	I0717 00:25:22.004824   15320 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 00:25:22.109849   15320 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0717 00:25:22.109849   15320 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0717 00:25:22.196647   15320 system_pods.go:59] 19 kube-system pods found
	I0717 00:25:22.196647   15320 system_pods.go:61] "coredns-7db6d8ff4d-nf699" [4a696e39-4ace-4c56-8517-7a5afb315387] Running
	I0717 00:25:22.196647   15320 system_pods.go:61] "coredns-7db6d8ff4d-sm99c" [101163ed-7dfb-48cd-ac5a-796466261988] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I0717 00:25:22.196647   15320 system_pods.go:61] "csi-hostpath-attacher-0" [4bc128ad-8e9a-4859-82ec-7ac932043e6a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0717 00:25:22.196647   15320 system_pods.go:61] "csi-hostpath-resizer-0" [cdcbbdb0-7774-4c57-99fa-7aa6b35d0172] Pending
	I0717 00:25:22.196647   15320 system_pods.go:61] "csi-hostpathplugin-p46j7" [b058fe26-1311-44eb-a947-a329d66a830f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0717 00:25:22.196647   15320 system_pods.go:61] "etcd-addons-285600" [cccf6cd7-b733-41f1-9d13-9b0cc44b1e3a] Running
	I0717 00:25:22.196647   15320 system_pods.go:61] "kube-apiserver-addons-285600" [27a7c28f-a604-4476-b750-e17f7cf9729b] Running
	I0717 00:25:22.196647   15320 system_pods.go:61] "kube-controller-manager-addons-285600" [440e4b9e-212a-4de9-a637-3ac5fd2e393c] Running
	I0717 00:25:22.197215   15320 system_pods.go:61] "kube-ingress-dns-minikube" [2ea3a02f-c6d2-4cfb-9aa7-71aa9e54eeee] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0717 00:25:22.197215   15320 system_pods.go:61] "kube-proxy-52kxk" [66e20999-d337-4bbd-9455-00e05a927337] Running
	I0717 00:25:22.197297   15320 system_pods.go:61] "kube-scheduler-addons-285600" [86efb234-57d0-495c-9261-628a414c1a69] Running
	I0717 00:25:22.197297   15320 system_pods.go:61] "metrics-server-c59844bb4-mr9k5" [f5d4dd4d-71da-4ac8-9570-0b6d36289353] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 00:25:22.197297   15320 system_pods.go:61] "nvidia-device-plugin-daemonset-bqpr8" [5ca506e8-4ed8-4d01-b876-fc9da45f4226] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0717 00:25:22.197408   15320 system_pods.go:61] "registry-cmvrv" [bde7187f-101b-47a5-8f05-14625aa13089] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0717 00:25:22.197449   15320 system_pods.go:61] "registry-proxy-zgk9n" [0cd181eb-4574-4249-b6f0-a750246d67bc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0717 00:25:22.197449   15320 system_pods.go:61] "snapshot-controller-745499f584-jx2nt" [6dc2641a-8d4d-4b21-8bca-55e71e9d8056] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 00:25:22.197582   15320 system_pods.go:61] "snapshot-controller-745499f584-w4fbj" [5289f77b-eaea-4663-9358-ac94bb6ab671] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 00:25:22.197638   15320 system_pods.go:61] "storage-provisioner" [3c1b5296-a8e0-4030-a5a9-5d1dd1676c0a] Running
	I0717 00:25:22.197638   15320 system_pods.go:61] "tiller-deploy-6677d64bcd-94x7l" [efa0aad9-f794-4cd2-96ca-89ef18232474] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0717 00:25:22.197681   15320 system_pods.go:74] duration metric: took 192.8558ms to wait for pod list to return data ...
	I0717 00:25:22.197681   15320 default_sa.go:34] waiting for default service account to be created ...
	I0717 00:25:22.284033   15320 default_sa.go:45] found service account: "default"
	I0717 00:25:22.284234   15320 default_sa.go:55] duration metric: took 86.4563ms for default service account to be created ...
	I0717 00:25:22.284234   15320 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 00:25:22.397448   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:22.398425   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:22.399816   15320 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 00:25:22.399912   15320 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0717 00:25:22.402697   15320 system_pods.go:86] 19 kube-system pods found
	I0717 00:25:22.402697   15320 system_pods.go:89] "coredns-7db6d8ff4d-nf699" [4a696e39-4ace-4c56-8517-7a5afb315387] Running
	I0717 00:25:22.402697   15320 system_pods.go:89] "coredns-7db6d8ff4d-sm99c" [101163ed-7dfb-48cd-ac5a-796466261988] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I0717 00:25:22.402697   15320 system_pods.go:89] "csi-hostpath-attacher-0" [4bc128ad-8e9a-4859-82ec-7ac932043e6a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0717 00:25:22.402697   15320 system_pods.go:89] "csi-hostpath-resizer-0" [cdcbbdb0-7774-4c57-99fa-7aa6b35d0172] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0717 00:25:22.402697   15320 system_pods.go:89] "csi-hostpathplugin-p46j7" [b058fe26-1311-44eb-a947-a329d66a830f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0717 00:25:22.402697   15320 system_pods.go:89] "etcd-addons-285600" [cccf6cd7-b733-41f1-9d13-9b0cc44b1e3a] Running
	I0717 00:25:22.402697   15320 system_pods.go:89] "kube-apiserver-addons-285600" [27a7c28f-a604-4476-b750-e17f7cf9729b] Running
	I0717 00:25:22.402697   15320 system_pods.go:89] "kube-controller-manager-addons-285600" [440e4b9e-212a-4de9-a637-3ac5fd2e393c] Running
	I0717 00:25:22.402697   15320 system_pods.go:89] "kube-ingress-dns-minikube" [2ea3a02f-c6d2-4cfb-9aa7-71aa9e54eeee] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0717 00:25:22.402697   15320 system_pods.go:89] "kube-proxy-52kxk" [66e20999-d337-4bbd-9455-00e05a927337] Running
	I0717 00:25:22.402697   15320 system_pods.go:89] "kube-scheduler-addons-285600" [86efb234-57d0-495c-9261-628a414c1a69] Running
	I0717 00:25:22.402697   15320 system_pods.go:89] "metrics-server-c59844bb4-mr9k5" [f5d4dd4d-71da-4ac8-9570-0b6d36289353] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 00:25:22.402697   15320 system_pods.go:89] "nvidia-device-plugin-daemonset-bqpr8" [5ca506e8-4ed8-4d01-b876-fc9da45f4226] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0717 00:25:22.402697   15320 system_pods.go:89] "registry-cmvrv" [bde7187f-101b-47a5-8f05-14625aa13089] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0717 00:25:22.402697   15320 system_pods.go:89] "registry-proxy-zgk9n" [0cd181eb-4574-4249-b6f0-a750246d67bc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0717 00:25:22.402697   15320 system_pods.go:89] "snapshot-controller-745499f584-jx2nt" [6dc2641a-8d4d-4b21-8bca-55e71e9d8056] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 00:25:22.402697   15320 system_pods.go:89] "snapshot-controller-745499f584-w4fbj" [5289f77b-eaea-4663-9358-ac94bb6ab671] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0717 00:25:22.402697   15320 system_pods.go:89] "storage-provisioner" [3c1b5296-a8e0-4030-a5a9-5d1dd1676c0a] Running
	I0717 00:25:22.402697   15320 system_pods.go:89] "tiller-deploy-6677d64bcd-94x7l" [efa0aad9-f794-4cd2-96ca-89ef18232474] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0717 00:25:22.402697   15320 system_pods.go:126] duration metric: took 118.4618ms to wait for k8s-apps to be running ...
	I0717 00:25:22.402697   15320 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 00:25:22.421264   15320 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:25:22.496639   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:22.603443   15320 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 00:25:22.889741   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:22.890632   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:22.993514   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:23.389311   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:23.391304   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:23.495312   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:23.891429   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:23.891925   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:23.996071   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:24.392485   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:24.392485   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:24.493513   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:24.892716   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:24.893055   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:24.994090   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:25.388820   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:25.392142   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:25.496861   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:25.603574   15320 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.1822348s)
	I0717 00:25:25.603574   15320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.9435796s)
	I0717 00:25:25.603574   15320 system_svc.go:56] duration metric: took 3.2008518s WaitForService to wait for kubelet
	I0717 00:25:25.603574   15320 kubeadm.go:582] duration metric: took 37.7920867s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:25:25.603574   15320 node_conditions.go:102] verifying NodePressure condition ...
	I0717 00:25:25.687217   15320 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0717 00:25:25.687217   15320 node_conditions.go:123] node cpu capacity is 16
	I0717 00:25:25.687659   15320 node_conditions.go:105] duration metric: took 84.0844ms to run NodePressure ...
	I0717 00:25:25.687803   15320 start.go:241] waiting for startup goroutines ...
	I0717 00:25:25.890789   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:25.891026   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:26.011120   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:26.397411   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:26.399736   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:26.492232   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:26.494814   15320 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (3.8913398s)
	I0717 00:25:26.503980   15320 addons.go:475] Verifying addon gcp-auth=true in "addons-285600"
	I0717 00:25:26.506971   15320 out.go:177] * Verifying gcp-auth addon...
	I0717 00:25:26.519326   15320 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0717 00:25:26.586324   15320 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0717 00:25:26.829624   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:26.829624   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:26.939670   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:27.331995   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:27.335961   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:27.429857   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:27.821524   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:27.830385   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:27.931597   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:28.325152   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:28.327973   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:28.440255   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:28.845525   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:28.845781   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:28.935511   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:29.321517   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:29.331340   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:29.434858   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:29.831501   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:29.835664   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:29.939516   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:30.330846   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:30.330846   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:30.441128   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:30.826424   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:30.832061   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:30.935825   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:31.325789   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:31.331753   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:31.432513   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:31.826296   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:31.826458   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:31.942911   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:32.331343   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:32.335801   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:32.428448   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:32.832792   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:32.833068   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:32.930391   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:33.323558   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:33.334984   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:33.433847   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:33.823809   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:33.824348   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:33.938467   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:34.331710   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:34.336456   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:34.434490   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:34.822445   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:34.829302   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:34.936945   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:35.389903   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:35.390959   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:35.435622   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:35.822439   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:35.830065   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:35.934343   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:36.335228   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:36.340547   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:36.429971   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:36.837228   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:36.840666   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:36.931592   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:37.332820   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:37.337945   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:37.494305   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:37.836077   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:37.836538   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:37.928465   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:38.322990   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:38.335035   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:38.438192   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:38.825579   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:38.826204   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:38.936601   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:39.329104   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:39.333228   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:39.427414   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:39.830171   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:39.833252   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:39.942089   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:40.333779   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:40.333779   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:40.427758   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:40.832222   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:40.832965   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:40.927907   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:41.330666   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:41.333482   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:41.518692   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:41.831275   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:41.831275   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:41.928366   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:42.347905   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:42.348067   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:42.478310   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:42.832930   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:42.832930   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:42.942964   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:43.330379   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:43.332335   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:43.429394   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:43.829672   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:43.830995   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:43.940285   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:44.326487   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:44.326851   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:44.439146   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:44.834315   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:44.834315   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:44.930753   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:45.323511   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:45.333534   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:45.434131   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:45.825608   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:45.825608   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:45.942480   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:46.332891   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:46.332891   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:46.429911   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:46.829777   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:46.829777   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:46.940780   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:47.334655   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:47.336665   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:47.434633   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:47.828695   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:47.832695   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:47.938691   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:48.387698   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:48.387698   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:48.490707   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:48.826377   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:48.826806   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:48.940471   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:49.331074   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:49.331808   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:49.427137   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:49.834297   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:49.837761   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:49.928937   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:50.321972   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:50.330370   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:50.432821   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:50.825408   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:50.828118   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:50.937388   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:51.325640   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:51.330518   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:51.435310   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:51.826697   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:51.826821   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:51.940181   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:52.354695   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:52.356612   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:52.429357   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:52.830144   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:52.831088   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:52.936888   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:53.327940   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:53.328174   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:53.434296   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:53.829201   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:53.832858   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:53.938713   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:54.337398   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:54.338386   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:54.442031   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:54.885310   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:54.892514   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:54.931006   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:55.323614   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:55.329608   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:55.437303   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:55.829942   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:55.829942   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:55.941345   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:56.320804   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:56.328826   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:56.434824   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:56.832130   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:56.832539   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:56.933875   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:57.325013   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:57.387173   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:57.486177   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:57.828063   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:57.828412   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:57.939038   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:58.329828   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:58.330931   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:58.439003   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:58.829758   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:58.831912   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:58.940161   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:59.332328   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:59.332328   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:59.423307   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:25:59.821692   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:25:59.830914   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:25:59.939554   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:00.324586   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:00.330138   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:00.437702   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:00.832005   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:00.835516   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:00.928523   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:01.320862   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:01.325868   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:01.434963   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:01.825272   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:01.826587   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:01.935884   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:02.330896   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:02.332814   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:02.425716   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:02.822432   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:02.830309   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:02.933081   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:03.328996   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:03.328996   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:03.442013   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:03.833057   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:03.833889   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:03.927590   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:04.325979   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:04.328871   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:04.437105   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:04.830306   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:04.831500   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:04.940515   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:05.331581   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:05.331703   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:05.440805   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:05.831982   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:05.832906   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:05.928970   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:06.321361   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:06.329719   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:06.430780   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:06.826973   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:06.827318   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:06.938434   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:07.328507   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:07.329743   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:07.440300   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:07.832033   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:07.832214   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:07.942169   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:08.334714   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:08.335069   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:08.431618   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:08.825011   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:08.826907   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:08.938342   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:09.331562   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:09.334205   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:09.427870   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:09.821834   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:09.830214   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:09.933427   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:10.326903   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:10.327304   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:10.436304   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:10.833201   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:10.834316   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:10.926765   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:11.321454   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:11.328965   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:11.433118   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:11.826205   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:11.826651   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:11.935184   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:12.330435   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:12.330806   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:12.439210   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:12.835279   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:12.836633   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:12.933200   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:13.324051   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:13.332718   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:13.433339   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:13.826384   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:13.832488   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:13.940394   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:14.333426   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:14.333650   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:14.429314   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:14.821915   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:14.831738   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:14.933699   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:15.325765   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:15.326076   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:15.435548   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:15.826256   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:15.826256   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:15.938256   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:16.329336   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:16.329875   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:16.439922   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:16.833326   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:16.834235   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:16.927868   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:17.322160   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:17.331024   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:17.431768   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:17.826631   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:17.835207   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:17.935716   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:18.327492   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:18.327492   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:18.441009   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:18.968309   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:18.970688   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:18.973011   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:19.450426   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:19.454784   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:19.455753   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:19.831452   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:19.832125   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:19.940330   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:20.327916   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:20.328169   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:20.439252   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:26:20.835277   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:26:20.836130   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 00:26:58.403347   15320 kapi.go:107] duration metric: took 1m39.093799s to wait for kubernetes.io/minikube-addons=registry ...
	I0717 00:27:08.441716   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:08.835143   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:08.931712   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:09.336881   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:09.433946   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:09.838667   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:09.936535   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:10.330575   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:10.439155   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:10.824873   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:10.935849   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:11.336835   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:11.430660   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:11.835074   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:11.932794   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:12.333807   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:12.429855   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:12.838491   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:12.980510   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:13.330675   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:13.429672   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:13.836835   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:13.984709   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:14.330344   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:14.429334   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:14.880342   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:14.983598   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:15.381632   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:15.484621   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:15.879619   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:15.980617   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:16.337505   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:16.430939   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:16.836526   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:16.934076   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:17.328718   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:17.430258   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:17.837525   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:17.935840   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:18.330498   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:18.442817   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:18.834131   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:18.930424   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:19.335657   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:19.431814   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:19.881820   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:19.929640   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:20.376550   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:20.433313   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:20.839909   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:20.938673   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:21.328586   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:21.436787   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:21.826824   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:21.940150   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:22.333335   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:22.426308   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:22.835931   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:22.930521   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:23.337382   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:23.435127   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:24.121615   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:24.124046   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:24.328155   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:24.565529   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:24.824308   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:24.938360   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:25.329857   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:25.442538   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:25.828679   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:25.940937   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:26.334109   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:26.430717   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:26.825364   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:26.987415   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:27.379323   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:27.437335   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:27.828333   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:27.943344   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:28.334981   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:28.481968   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:28.826348   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:28.978931   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:29.326774   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:29.479441   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:29.828841   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:29.937724   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:30.331618   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:30.440821   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:30.833027   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:30.926788   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:31.334622   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:31.481380   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:31.839623   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:31.939218   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:32.327538   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:32.480450   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:32.830502   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:32.939967   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:33.340919   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:33.436041   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:33.823662   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:33.934340   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:34.339123   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:34.434959   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:34.837169   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:34.932278   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:35.381170   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:35.483050   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:35.880150   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:35.984966   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:36.400877   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:36.433623   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:36.826957   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:36.938320   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:37.334728   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:37.432900   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:37.838019   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:37.935175   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:38.390940   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:38.432712   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:38.877083   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:38.933844   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:39.338245   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:39.448086   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:39.848998   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:39.934200   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:40.381638   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:40.490431   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:40.880155   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:40.984159   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:41.335686   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:41.430298   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:41.830624   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:41.927021   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:42.329790   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:42.438700   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:42.833928   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:42.930932   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:43.328942   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:43.441966   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:43.834943   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:43.932939   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:44.330658   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:44.439679   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:44.828180   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:44.951300   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:45.333023   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:45.444145   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:45.876409   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:45.988455   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:46.337025   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:46.431606   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:46.892287   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:47.024152   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:47.338674   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:47.479091   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:47.844337   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:47.940948   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:48.330899   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:48.427867   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:48.837757   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:48.932757   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:49.328886   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:49.443887   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:49.838744   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:49.935701   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:50.393866   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:50.435063   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:50.828384   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:50.979243   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:51.381137   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:51.481200   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:51.835831   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:51.934070   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:52.339098   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:52.435163   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:52.836989   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:52.935607   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:53.339519   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:53.433912   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:53.836961   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:53.930543   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:54.326136   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:54.441143   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:54.837742   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:54.933730   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:55.325819   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:55.442837   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:55.875488   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:55.987875   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:56.337227   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:56.432940   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:56.837524   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:56.937724   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:57.338943   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:57.435910   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:57.827290   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:57.939150   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:58.327807   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:58.440410   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:58.842223   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:58.947897   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:59.334718   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:59.430732   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:27:59.836403   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:27:59.934065   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:00.328548   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:00.440382   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:00.836702   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:00.931667   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:01.336499   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:01.479511   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:01.835045   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:01.929431   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:02.325625   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:02.443717   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:02.846111   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:02.938371   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:03.330524   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:03.435268   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:03.838082   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:03.947201   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:04.327162   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:04.451816   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:04.837655   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:04.945958   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:05.336039   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:05.438017   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:05.876339   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:05.930559   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:06.337517   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:06.436249   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:06.828638   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:06.933157   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:07.325465   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:07.470731   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:07.830901   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:07.937229   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:08.334439   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:08.430748   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:08.838444   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:08.937011   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:09.327974   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:09.446667   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:09.830396   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:09.946093   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:10.325772   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:10.437275   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:10.832676   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:10.933071   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:11.330994   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:11.473094   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:11.879276   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:11.939451   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:12.348643   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:12.442387   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:12.844503   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:12.938141   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:13.376729   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:13.484421   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:13.841500   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:13.944462   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:14.328267   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:14.438457   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:14.829831   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:14.936444   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:15.384639   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:15.484080   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:15.831941   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:15.943410   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:16.334523   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:16.434424   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:16.837858   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:16.941252   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:17.349585   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:17.440053   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:17.830717   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:17.937671   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:18.330424   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:18.433957   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:18.836820   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:18.952396   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:19.341033   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:19.501964   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:19.876624   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:19.976732   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:20.338698   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:20.430350   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:20.850660   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:20.946307   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:21.335256   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:21.430738   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:21.832477   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:21.934615   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:22.326075   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:22.440813   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:22.826845   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:22.991454   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:23.326598   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:23.437905   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:23.832504   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:23.931974   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:24.327334   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:24.440054   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:24.838235   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:24.930247   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:25.327667   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:25.430155   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:25.845620   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:25.948216   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:26.346295   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:26.440474   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:26.834300   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:26.943022   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:27.326445   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:27.446878   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:27.835749   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:27.931457   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:28.331939   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:28.436386   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:28.837153   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:28.941233   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:29.331825   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:29.431029   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:29.886792   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:29.951424   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:30.330383   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:30.438872   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:30.825940   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:30.934452   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:31.337146   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:31.437754   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:31.824666   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:31.939591   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:32.331519   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:32.436660   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:32.841020   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:32.939071   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:33.337296   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:33.445464   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:33.831485   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:33.940083   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:34.333572   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:34.436416   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:34.873839   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:34.937892   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:35.336564   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:35.440491   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:35.826582   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:35.978579   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:36.371046   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:36.434741   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 00:28:36.831417   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:36.942288   15320 kapi.go:107] duration metric: took 3m15.0255282s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0717 00:28:37.347061   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:37.826362   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:38.327515   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:38.844422   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:39.329999   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:39.838418   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:40.327418   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:40.834734   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:41.335239   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:41.835052   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:42.343856   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:42.839529   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:43.329568   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:43.825475   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:44.337382   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:44.828551   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:45.333723   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:45.825446   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:46.330599   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:46.826722   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:47.328452   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:47.825407   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:48.325158   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:48.826706   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:49.334533   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:49.835384   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:50.329575   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:50.835317   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:51.334324   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:51.836874   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:52.340422   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:52.835376   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:53.339283   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:53.835905   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:54.341004   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:54.832104   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:55.338761   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:55.835564   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:56.336459   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:56.826063   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:57.327495   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:57.825472   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:58.339477   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:58.833674   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:59.337842   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:28:59.838580   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:00.336852   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:00.831584   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:01.328237   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:01.840362   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:02.328354   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:02.837414   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:03.329129   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:03.840456   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:04.330096   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:04.872677   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:05.352794   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:05.837784   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:06.336217   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:06.838323   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:07.349706   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:07.838304   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:08.330886   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:08.834682   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:09.339536   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:09.826335   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:10.333324   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:10.833316   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:11.334330   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:11.839191   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:12.329835   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:12.828848   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:13.326330   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:13.826248   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:14.325591   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:14.842226   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:15.331075   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:15.838883   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:16.333145   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:16.837432   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:17.327859   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:17.833068   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:18.335929   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:18.829690   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:19.340666   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:19.832977   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:20.340664   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:20.831635   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:21.338613   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:21.834152   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:22.332006   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:22.870542   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:23.331227   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:23.832492   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:24.331772   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:24.834623   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:25.333607   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:25.871854   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:26.372023   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:26.837949   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:27.340751   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:27.831924   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:28.379663   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:28.832052   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:29.474308   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:29.978216   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:30.470943   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:30.875936   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:31.371983   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:31.870179   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:32.373411   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:32.875417   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:33.373035   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:33.870175   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:34.373074   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:34.872059   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:35.372381   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:35.839965   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:36.372646   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:36.836041   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:37.369513   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:37.870239   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:38.328839   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:38.839019   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:39.342228   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:39.834528   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:40.333188   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:40.836597   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:41.339500   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:41.844693   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:42.336743   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:42.838689   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:43.336915   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:43.867932   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:44.338238   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:44.837962   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:45.340374   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:45.829706   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:46.325231   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:46.837299   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:47.340250   15320 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 00:29:47.829963   15320 kapi.go:107] duration metric: took 4m28.5134514s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0717 00:30:54.533440   15320 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0717 00:30:54.533544   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:30:55.045726   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:30:55.531385   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:30:56.046242   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:30:56.547590   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:30:57.068686   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:30:57.542520   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:30:58.037457   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:30:58.538715   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:30:59.061545   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:30:59.536694   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:31:00.039379   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:31:00.543195   15320 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 00:31:01.035882   15320 kapi.go:107] duration metric: took 5m34.5144398s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0717 00:31:01.039874   15320 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-285600 cluster.
	I0717 00:31:01.048227   15320 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0717 00:31:01.055144   15320 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0717 00:31:01.058399   15320 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, volcano, nvidia-device-plugin, storage-provisioner, metrics-server, helm-tiller, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0717 00:31:01.062796   15320 addons.go:510] duration metric: took 6m13.2486506s for enable addons: enabled=[cloud-spanner ingress-dns volcano nvidia-device-plugin storage-provisioner metrics-server helm-tiller inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0717 00:31:01.062942   15320 start.go:246] waiting for cluster config update ...
	I0717 00:31:01.063025   15320 start.go:255] writing updated cluster config ...
	I0717 00:31:01.072222   15320 ssh_runner.go:195] Run: rm -f paused
	I0717 00:31:01.344755   15320 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 00:31:01.348827   15320 out.go:177] * Done! kubectl is now configured to use "addons-285600" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jul 17 00:32:32 addons-285600 dockerd[1357]: time="2024-07-17T00:32:32.206558496Z" level=info msg="ignoring event" container=3638dece0e625f028cb9b28d60142a7c06da70a235a40d8acfab0e5680a37d35 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 00:32:32 addons-285600 dockerd[1357]: time="2024-07-17T00:32:32.492124450Z" level=info msg="ignoring event" container=fd97d5c28ede9d82b86794a5b1babaecc8ca3fec8cb2c245c5a1538542109e11 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 00:32:36 addons-285600 dockerd[1357]: time="2024-07-17T00:32:36.374559761Z" level=info msg="ignoring event" container=11bef097ac39e479c21ca8eccd97e793fd0f5639a1ebb0fefaac1385284fa1d2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 00:32:36 addons-285600 dockerd[1357]: time="2024-07-17T00:32:36.568993725Z" level=info msg="ignoring event" container=da8bd9da0a4821fca54a678f99bf7834ce1bd1857c0545267c341c914ac16eb3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 00:32:36 addons-285600 dockerd[1357]: time="2024-07-17T00:32:36.654350094Z" level=info msg="ignoring event" container=34168158f84e7871b566d29d069ad7933522bdc412a78fe2e415832549dccb95 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 00:32:36 addons-285600 dockerd[1357]: time="2024-07-17T00:32:36.654427900Z" level=info msg="ignoring event" container=35604519e6032c35ccfce9aa5b65050f0e86c7847b471b9d7cc4ae4f3d7e6583 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 00:32:36 addons-285600 dockerd[1357]: time="2024-07-17T00:32:36.654473903Z" level=info msg="ignoring event" container=1683171e8ab634dc7d3382f3bdb7e6c7a40e95fbc1456ae3f4eab5c396aea7e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 00:32:36 addons-285600 dockerd[1357]: time="2024-07-17T00:32:36.660930000Z" level=info msg="ignoring event" container=bf32f6b9e50e03834c1ff403942e50611c7f3a15392372ab6fafc819bb262089 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 00:32:36 addons-285600 dockerd[1357]: time="2024-07-17T00:32:36.754936435Z" level=info msg="ignoring event" container=f7d0033497bcf82aa14f6f9bb77d91f3c2cc3c1e3e7632bdfaf86afd63b374bd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 00:32:37 addons-285600 dockerd[1357]: time="2024-07-17T00:32:37.352938157Z" level=info msg="ignoring event" container=4be21176c97b470f295382773a2b4628f3ec5ead8863f6c759dc25256cd32512 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 00:32:37 addons-285600 cri-dockerd[1630]: time="2024-07-17T00:32:37Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"csi-hostpath-attacher-0_kube-system\": unexpected command output nsenter: cannot open /proc/6234/ns/net: No such file or directory\n with error: exit status 1"
	Jul 17 00:32:37 addons-285600 dockerd[1357]: time="2024-07-17T00:32:37.373193616Z" level=info msg="ignoring event" container=05ecaffca4a48713d208e2bd37d8742c90db77af77d6a416a3fbe34eb19975b8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 00:32:37 addons-285600 dockerd[1357]: time="2024-07-17T00:32:37.779800409Z" level=info msg="ignoring event" container=ba728840ba086b884b8042af7a12836035391fcc49e3eb4c3c4c9119d9e8e071 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 00:32:37 addons-285600 dockerd[1357]: time="2024-07-17T00:32:37.952523502Z" level=info msg="ignoring event" container=3939b48efe94a44bcc6b160af40b3d3a7d93ffec470ac4bd19f5c7b2de0750dc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 00:32:39 addons-285600 dockerd[1357]: time="2024-07-17T00:32:39.154996944Z" level=info msg="Container failed to exit within 30s of signal 3 - using the force" container=dae4e5c9f182a0a01ae748b5f74eef2aa28b69668e1f394f233e5f51538b1e88
	Jul 17 00:32:39 addons-285600 dockerd[1357]: time="2024-07-17T00:32:39.208579568Z" level=info msg="ignoring event" container=dae4e5c9f182a0a01ae748b5f74eef2aa28b69668e1f394f233e5f51538b1e88 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 00:32:39 addons-285600 cri-dockerd[1630]: time="2024-07-17T00:32:39Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"test-job-nginx-0_my-volcano\": unexpected command output nsenter: cannot open /proc/13439/ns/net: No such file or directory\n with error: exit status 1"
	Jul 17 00:32:39 addons-285600 dockerd[1357]: time="2024-07-17T00:32:39.570654734Z" level=info msg="ignoring event" container=151974e95d1459d78f455aba6bf224f0326674d324601841be9f4ae1c190e742 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 00:32:45 addons-285600 dockerd[1357]: time="2024-07-17T00:32:45.159157739Z" level=info msg="ignoring event" container=77d64336069e56ce726579c74683d8d02659e33a7b9e3540431f92f0c18cc898 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 00:32:45 addons-285600 dockerd[1357]: time="2024-07-17T00:32:45.167255037Z" level=info msg="ignoring event" container=27233d50e2a47d07531049dce6241b5504c4b731f3b3f72adc73e2a3c75c1105 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 00:32:45 addons-285600 dockerd[1357]: time="2024-07-17T00:32:45.564684986Z" level=info msg="ignoring event" container=a62ee99fad3b78ea1e33d11a8cb9eb58c35f716a6a7eee57ff57ba1709295224 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 00:32:45 addons-285600 dockerd[1357]: time="2024-07-17T00:32:45.615334227Z" level=info msg="ignoring event" container=0473feab614ce8afcbd602574cddb08e842e145aeedbcd1006c5e0caa703cfe1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 00:32:49 addons-285600 dockerd[1357]: time="2024-07-17T00:32:49.495928921Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=dc5e313e09ba634bc6daed74a00996c067d9293fbed4a2f16935efc878c23aec
	Jul 17 00:32:49 addons-285600 dockerd[1357]: time="2024-07-17T00:32:49.549780398Z" level=info msg="ignoring event" container=dc5e313e09ba634bc6daed74a00996c067d9293fbed4a2f16935efc878c23aec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 00:32:49 addons-285600 dockerd[1357]: time="2024-07-17T00:32:49.869824132Z" level=info msg="ignoring event" container=4a106b443c79058014987a40cfb58bab18a5736d38d2a994a4f2393923675cb4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	19afa2c84682c       ghcr.io/headlamp-k8s/headlamp@sha256:1c3f42aacd8eee1d3f1c63efb5a3b42da387ca1d87b77b0f486e8443201fcb37                        8 minutes ago       Running             headlamp                  0                   70b77e143628e       headlamp-7867546754-wpq7n
	66455b0fb868d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                  0                   cb0e151535e3d       gcp-auth-5db96cd9b4-rvkdj
	95a89f7d433e0       registry.k8s.io/ingress-nginx/controller@sha256:e24f39d3eed6bcc239a56f20098878845f62baa34b9f2be2fd2c38ce9fb0f29e             10 minutes ago      Running             controller                0                   2e746db5a4916       ingress-nginx-controller-768f948f8f-87948
	2cc6a33954cdf       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366   12 minutes ago      Exited              patch                     0                   5400756b2198c       ingress-nginx-admission-patch-5z4fc
	347948f6a083e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366   12 minutes ago      Exited              create                    0                   2af3d3f0c599d       ingress-nginx-admission-create-dg8n4
	dd3e40f17dcb9       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                        12 minutes ago      Running             yakd                      0                   0c0a5a27c80a7       yakd-dashboard-799879c74f-zjfzg
	2c9e2f63317b0       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              13 minutes ago      Running             registry-proxy            0                   09b04bee80b56       registry-proxy-zgk9n
	0188b552c9c09       registry@sha256:79b29591e1601a73f03fcd413e655b72b9abfae5a23f1ad2e883d4942fbb4351                                             13 minutes ago      Running             registry                  0                   36c28c1d43f54       registry-cmvrv
	091f8939b4667       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             13 minutes ago      Running             minikube-ingress-dns      0                   5bcf56744b27d       kube-ingress-dns-minikube
	ee774444733fa       6e38f40d628db                                                                                                                15 minutes ago      Running             storage-provisioner       0                   826582989c467       storage-provisioner
	4c518a47adfb9       cbb01a7bd410d                                                                                                                15 minutes ago      Running             coredns                   0                   f42fbf175781b       coredns-7db6d8ff4d-nf699
	15fc6a460ba3e       53c535741fb44                                                                                                                15 minutes ago      Running             kube-proxy                0                   2917167c7a5c7       kube-proxy-52kxk
	b8f155644f3d6       e874818b3caac                                                                                                                15 minutes ago      Running             kube-controller-manager   0                   ebd92e540ad43       kube-controller-manager-addons-285600
	ae1ce42c8a724       3861cfcd7c04c                                                                                                                15 minutes ago      Running             etcd                      0                   67068eddd99ff       etcd-addons-285600
	6ea46f4308464       7820c83aa1394                                                                                                                15 minutes ago      Running             kube-scheduler            0                   bb0db5a9a5082       kube-scheduler-addons-285600
	504a7933b7fd7       56ce0fd9fb532                                                                                                                15 minutes ago      Running             kube-apiserver            0                   2024e0b8329be       kube-apiserver-addons-285600
	
	
	==> controller_ingress [95a89f7d433e] <==
	I0717 00:29:48.529816       7 controller.go:190] "Configuration changes detected, backend reload required"
	I0717 00:29:48.545453       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0717 00:29:48.545537       7 status.go:84] "New leader elected" identity="ingress-nginx-controller-768f948f8f-87948"
	I0717 00:29:48.550785       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-768f948f8f-87948" node="addons-285600"
	I0717 00:29:48.571350       7 controller.go:210] "Backend successfully reloaded"
	I0717 00:29:48.571585       7 controller.go:221] "Initial sync, sleeping for 1 second"
	I0717 00:29:48.571737       7 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-768f948f8f-87948", UID:"7a444945-704d-48b0-a754-9096b269c241", APIVersion:"v1", ResourceVersion:"832", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0717 00:32:06.063137       7 controller.go:1107] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0717 00:32:06.179297       7 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.117s renderingIngressLength:1 renderingIngressTime:0s admissionTime:0.117s testedConfigurationSize:18.1kB}
	I0717 00:32:06.179437       7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0717 00:32:06.256497       7 store.go:440] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	I0717 00:32:06.257985       7 event.go:364] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"0afb6b37-56f5-4dbd-9618-b42994b094e9", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2110", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0717 00:32:07.754016       7 controller.go:1107] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0717 00:32:07.754797       7 controller.go:190] "Configuration changes detected, backend reload required"
	I0717 00:32:08.172004       7 controller.go:210] "Backend successfully reloaded"
	I0717 00:32:08.173034       7 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-768f948f8f-87948", UID:"7a444945-704d-48b0-a754-9096b269c241", APIVersion:"v1", ResourceVersion:"832", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0717 00:32:11.071143       7 controller.go:1213] Service "default/nginx" does not have any active Endpoint.
	I0717 00:32:11.071411       7 controller.go:190] "Configuration changes detected, backend reload required"
	I0717 00:32:11.472596       7 controller.go:210] "Backend successfully reloaded"
	I0717 00:32:11.473535       7 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-768f948f8f-87948", UID:"7a444945-704d-48b0-a754-9096b269c241", APIVersion:"v1", ResourceVersion:"832", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0717 00:32:36.217814       7 controller.go:1213] Service "default/nginx" does not have any active Endpoint.
	W0717 00:32:39.552377       7 controller.go:1213] Service "default/nginx" does not have any active Endpoint.
	I0717 00:32:48.548766       7 status.go:304] "updating Ingress status" namespace="default" ingress="nginx-ingress" currentValue=null newValue=[{"ip":"192.168.49.2"}]
	I0717 00:32:48.567971       7 event.go:364] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"0afb6b37-56f5-4dbd-9618-b42994b094e9", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2491", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0717 00:32:48.568394       7 controller.go:1213] Service "default/nginx" does not have any active Endpoint.
	
	
	==> coredns [4c518a47adfb] <==
	[INFO] 10.244.0.9:44966 - 54155 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000689049s
	[INFO] 10.244.0.9:35080 - 51972 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000262718s
	[INFO] 10.244.0.9:35080 - 58889 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00014621s
	[INFO] 10.244.0.9:34529 - 46170 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000352325s
	[INFO] 10.244.0.9:34529 - 20831 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000899663s
	[INFO] 10.244.0.9:45899 - 58519 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000397228s
	[INFO] 10.244.0.9:45899 - 912 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000656346s
	[INFO] 10.244.0.9:45851 - 23458 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000154611s
	[INFO] 10.244.0.9:45851 - 58023 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000293921s
	[INFO] 10.244.0.9:52803 - 41665 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000369326s
	[INFO] 10.244.0.9:52803 - 34525 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000281319s
	[INFO] 10.244.0.9:34309 - 48768 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000243917s
	[INFO] 10.244.0.9:34309 - 53382 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000271119s
	[INFO] 10.244.0.9:40525 - 65236 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000172912s
	[INFO] 10.244.0.9:40525 - 41170 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000201114s
	[INFO] 10.244.0.26:56257 - 2550 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000541441s
	[INFO] 10.244.0.26:52396 - 10737 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00078506s
	[INFO] 10.244.0.26:59766 - 8073 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000158812s
	[INFO] 10.244.0.26:50904 - 64235 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000380229s
	[INFO] 10.244.0.26:35983 - 33921 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000197115s
	[INFO] 10.244.0.26:36008 - 41566 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000205416s
	[INFO] 10.244.0.26:35754 - 64266 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 230 0.014926544s
	[INFO] 10.244.0.26:55331 - 14457 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.015501688s
	[INFO] 10.244.0.29:34438 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000350627s
	[INFO] 10.244.0.29:59463 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000283122s
	
	
	==> describe nodes <==
	Name:               addons-285600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-285600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=addons-285600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T00_24_35_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-285600
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:24:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-285600
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:40:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:37:45 +0000   Wed, 17 Jul 2024 00:24:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:37:45 +0000   Wed, 17 Jul 2024 00:24:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:37:45 +0000   Wed, 17 Jul 2024 00:24:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:37:45 +0000   Wed, 17 Jul 2024 00:24:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-285600
	Capacity:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868764Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868764Ki
	  pods:               110
	System Info:
	  Machine ID:                 3f77aaa479284f84b7d5b16f7c24bd6e
	  System UUID:                3f77aaa479284f84b7d5b16f7c24bd6e
	  Boot ID:                    c8c682c7-038f-4949-bfeb-6c51c261a4de
	  Kernel Version:             5.15.146.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-5db96cd9b4-rvkdj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m19s
	  headlamp                    headlamp-7867546754-wpq7n                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	  ingress-nginx               ingress-nginx-controller-768f948f8f-87948    100m (0%)     0 (0%)      90Mi (0%)        0 (0%)         15m
	  kube-system                 coredns-7db6d8ff4d-nf699                     100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     15m
	  kube-system                 etcd-addons-285600                           100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         15m
	  kube-system                 kube-apiserver-addons-285600                 250m (1%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-285600        200m (1%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-52kxk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-285600                 100m (0%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 registry-cmvrv                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 registry-proxy-zgk9n                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  yakd-dashboard              yakd-dashboard-799879c74f-zjfzg              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (5%)   0 (0%)
	  memory             388Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node addons-285600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node addons-285600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node addons-285600 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node addons-285600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node addons-285600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node addons-285600 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             15m                kubelet          Node addons-285600 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                15m                kubelet          Node addons-285600 status is now: NodeReady
	  Normal  RegisteredNode           15m                node-controller  Node addons-285600 event: Registered Node addons-285600 in Controller
	
	
	==> dmesg <==
	[  +0.001058] FS-Cache: O-cookie c=00000006 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001229] FS-Cache: O-cookie d=00000000d644147d{9P.session} n=000000006201c53c
	[  +0.001174] FS-Cache: O-key=[10] '34323934393337343735'
	[  +0.000817] FS-Cache: N-cookie c=00000007 [p=00000002 fl=2 nc=0 na=1]
	[  +0.001103] FS-Cache: N-cookie d=00000000d644147d{9P.session} n=0000000023399480
	[  +0.001450] FS-Cache: N-key=[10] '34323934393337343735'
	[  +0.941940] WSL (1) ERROR: ConfigApplyWindowsLibPath:2537: open /etc/ld.so.conf.d/ld.wsl.conf
	[  +0.000005]  failed 2
	[  +0.058380] WSL (1) WARNING: /usr/share/zoneinfo/Etc/UTC not found. Is the tzdata package installed?
	[  +0.579542] misc dxg: dxgk: dxgglobal_acquire_channel_lock: Failed to acquire global channel lock
	[  +0.324531] WSL (2) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.002125] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.003033] WSL (1) ERROR: ConfigMountFsTab:2589: Processing fstab with mount -a failed.
	[  +0.003674] WSL (1) ERROR: ConfigApplyWindowsLibPath:2537: open /etc/ld.so.conf.d/ld.wsl.conf
	[  +0.000003]  failed 2
	[  +0.006706] WSL (3) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.001818] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.004345] WSL (4) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.002202] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.011260] WSL (1) WARNING: /usr/share/zoneinfo/Etc/UTC not found. Is the tzdata package installed?
	[  +0.193778] misc dxg: dxgk: dxgglobal_acquire_channel_lock: Failed to acquire global channel lock
	[  +0.644798] netlink: 'init': attribute type 4 has an invalid length.
	
	
	==> etcd [ae1ce42c8a72] <==
	{"level":"warn","ts":"2024-07-17T00:32:14.166835Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.411532ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.49.2\" ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2024-07-17T00:32:14.166866Z","caller":"traceutil/trace.go:171","msg":"trace[994312738] range","detail":"{range_begin:/registry/masterleases/192.168.49.2; range_end:; response_count:1; response_revision:2239; }","duration":"112.480737ms","start":"2024-07-17T00:32:14.054375Z","end":"2024-07-17T00:32:14.166856Z","steps":["trace[994312738] 'agreement among raft nodes before linearized reading'  (duration: 112.39003ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:32:36.777396Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.436553ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128030573542383521 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-285600\" mod_revision:2342 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-285600\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-285600\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-17T00:32:36.777606Z","caller":"traceutil/trace.go:171","msg":"trace[1261414075] linearizableReadLoop","detail":"{readStateIndex:2558; appliedIndex:2557; }","duration":"122.693143ms","start":"2024-07-17T00:32:36.654877Z","end":"2024-07-17T00:32:36.77757Z","steps":["trace[1261414075] 'read index received'  (duration: 29.303µs)","trace[1261414075] 'applied index is now lower than readState.Index'  (duration: 122.66294ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T00:32:36.777823Z","caller":"traceutil/trace.go:171","msg":"trace[1600168461] transaction","detail":"{read_only:false; response_revision:2426; number_of_response:1; }","duration":"223.063767ms","start":"2024-07-17T00:32:36.554749Z","end":"2024-07-17T00:32:36.777813Z","steps":["trace[1600168461] 'process raft request'  (duration: 97.095773ms)","trace[1600168461] 'compare'  (duration: 125.337446ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T00:32:36.778201Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.318791ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/kube-system/csi-hostpath-resizer-dd9fcd54\" ","response":"range_response_count:1 size:3027"}
	{"level":"info","ts":"2024-07-17T00:32:36.7783Z","caller":"traceutil/trace.go:171","msg":"trace[1101092261] range","detail":"{range_begin:/registry/controllerrevisions/kube-system/csi-hostpath-resizer-dd9fcd54; range_end:; response_count:1; response_revision:2426; }","duration":"123.447401ms","start":"2024-07-17T00:32:36.654841Z","end":"2024-07-17T00:32:36.778289Z","steps":["trace[1101092261] 'agreement among raft nodes before linearized reading'  (duration: 123.261987ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:32:36.778503Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.56591ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/csi-hostpath-resizer-0\" ","response":"range_response_count:1 size:3935"}
	{"level":"info","ts":"2024-07-17T00:32:36.778524Z","caller":"traceutil/trace.go:171","msg":"trace[504443253] range","detail":"{range_begin:/registry/pods/kube-system/csi-hostpath-resizer-0; range_end:; response_count:1; response_revision:2426; }","duration":"123.611614ms","start":"2024-07-17T00:32:36.654906Z","end":"2024-07-17T00:32:36.778518Z","steps":["trace[504443253] 'agreement among raft nodes before linearized reading'  (duration: 123.545309ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:32:36.778793Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.414121ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/csi-hostpath-sc\" ","response":"range_response_count:1 size:898"}
	{"level":"info","ts":"2024-07-17T00:32:36.778816Z","caller":"traceutil/trace.go:171","msg":"trace[1909900961] range","detail":"{range_begin:/registry/storageclasses/csi-hostpath-sc; range_end:; response_count:1; response_revision:2426; }","duration":"122.457524ms","start":"2024-07-17T00:32:36.656352Z","end":"2024-07-17T00:32:36.77881Z","steps":["trace[1909900961] 'agreement among raft nodes before linearized reading'  (duration: 122.311413ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:32:36.971106Z","caller":"traceutil/trace.go:171","msg":"trace[112906004] linearizableReadLoop","detail":"{readStateIndex:2559; appliedIndex:2558; }","duration":"117.574749ms","start":"2024-07-17T00:32:36.853502Z","end":"2024-07-17T00:32:36.971077Z","steps":["trace[112906004] 'read index received'  (duration: 98.577287ms)","trace[112906004] 'applied index is now lower than readState.Index'  (duration: 18.996862ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T00:32:36.971234Z","caller":"traceutil/trace.go:171","msg":"trace[999850595] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2427; }","duration":"118.880949ms","start":"2024-07-17T00:32:36.852335Z","end":"2024-07-17T00:32:36.971216Z","steps":["trace[999850595] 'process raft request'  (duration: 99.778079ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:32:36.971391Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.84987ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1114"}
	{"level":"info","ts":"2024-07-17T00:32:36.971452Z","caller":"traceutil/trace.go:171","msg":"trace[580926290] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2427; }","duration":"117.940877ms","start":"2024-07-17T00:32:36.853496Z","end":"2024-07-17T00:32:36.971437Z","steps":["trace[580926290] 'agreement among raft nodes before linearized reading'  (duration: 117.775464ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:32:36.971495Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.730683ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/kube-system/csi-hostpath-resizer-dd9fcd54\" ","response":"range_response_count:1 size:3027"}
	{"level":"info","ts":"2024-07-17T00:32:36.971536Z","caller":"traceutil/trace.go:171","msg":"trace[1705141235] range","detail":"{range_begin:/registry/controllerrevisions/kube-system/csi-hostpath-resizer-dd9fcd54; range_end:; response_count:1; response_revision:2427; }","duration":"116.790688ms","start":"2024-07-17T00:32:36.85473Z","end":"2024-07-17T00:32:36.971521Z","steps":["trace[1705141235] 'agreement among raft nodes before linearized reading'  (duration: 116.664479ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:32:36.971563Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.778888ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/csi-hostpath-resizer-0\" ","response":"range_response_count:1 size:3935"}
	{"level":"info","ts":"2024-07-17T00:32:36.971707Z","caller":"traceutil/trace.go:171","msg":"trace[491959069] range","detail":"{range_begin:/registry/pods/kube-system/csi-hostpath-resizer-0; range_end:; response_count:1; response_revision:2427; }","duration":"116.944901ms","start":"2024-07-17T00:32:36.854749Z","end":"2024-07-17T00:32:36.971693Z","steps":["trace[491959069] 'agreement among raft nodes before linearized reading'  (duration: 116.720783ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:34:28.580178Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1519}
	{"level":"info","ts":"2024-07-17T00:34:28.63808Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1519,"took":"57.113907ms","hash":4015451722,"current-db-size-bytes":11575296,"current-db-size":"12 MB","current-db-size-in-use-bytes":7446528,"current-db-size-in-use":"7.4 MB"}
	{"level":"info","ts":"2024-07-17T00:34:28.638185Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4015451722,"revision":1519,"compact-revision":-1}
	{"level":"info","ts":"2024-07-17T00:39:28.576119Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2594}
	{"level":"info","ts":"2024-07-17T00:39:28.622291Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2594,"took":"45.434819ms","hash":2141411913,"current-db-size-bytes":11575296,"current-db-size":"12 MB","current-db-size-in-use-bytes":5189632,"current-db-size-in-use":"5.2 MB"}
	{"level":"info","ts":"2024-07-17T00:39:28.622395Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2141411913,"revision":2594,"compact-revision":1519}
	
	
	==> gcp-auth [66455b0fb868] <==
	2024/07/17 00:31:03 Ready to write response ...
	2024/07/17 00:31:03 Ready to marshal response ...
	2024/07/17 00:31:03 Ready to write response ...
	2024/07/17 00:31:03 Ready to marshal response ...
	2024/07/17 00:31:03 Ready to write response ...
	2024/07/17 00:31:07 Ready to marshal response ...
	2024/07/17 00:31:07 Ready to write response ...
	2024/07/17 00:31:12 Ready to marshal response ...
	2024/07/17 00:31:12 Ready to write response ...
	2024/07/17 00:31:27 Ready to marshal response ...
	2024/07/17 00:31:27 Ready to write response ...
	2024/07/17 00:31:29 Ready to marshal response ...
	2024/07/17 00:31:29 Ready to write response ...
	2024/07/17 00:31:32 Ready to marshal response ...
	2024/07/17 00:31:32 Ready to write response ...
	2024/07/17 00:31:33 Ready to marshal response ...
	2024/07/17 00:31:33 Ready to write response ...
	2024/07/17 00:31:48 Ready to marshal response ...
	2024/07/17 00:31:48 Ready to write response ...
	2024/07/17 00:32:07 Ready to marshal response ...
	2024/07/17 00:32:07 Ready to write response ...
	2024/07/17 00:32:17 Ready to marshal response ...
	2024/07/17 00:32:17 Ready to write response ...
	2024/07/17 00:32:22 Ready to marshal response ...
	2024/07/17 00:32:22 Ready to write response ...
	
	
	==> kernel <==
	 00:40:13 up  2:26,  0 users,  load average: 0.15, 0.64, 0.87
	Linux addons-285600 5.15.146.1-microsoft-standard-WSL2 #1 SMP Thu Jan 11 04:09:03 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [504a7933b7fd] <==
	Trace[1660793770]: ["GuaranteedUpdate etcd3" audit-id:4aef6b03-abae-4b4a-b315-0156a63279fc,key:/apiextensions.k8s.io/customresourcedefinitions/jobtemplates.flow.volcano.sh,type:*apiextensions.CustomResourceDefinition,resource:customresourcedefinitions.apiextensions.k8s.io 798ms (00:32:12.565)
	Trace[1660793770]:  ---"About to Encode" 596ms (00:32:13.172)
	Trace[1660793770]:  ---"Txn call completed" 93ms (00:32:13.272)
	Trace[1660793770]:  ---"decode succeeded" len:123044 88ms (00:32:13.360)]
	Trace[1660793770]: [808.332074ms] [808.332074ms] END
	I0717 00:32:13.373309       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0717 00:32:14.481860       1 cacher.go:168] Terminating all watchers from cacher jobflows.flow.volcano.sh
	I0717 00:32:14.575866       1 trace.go:236] Trace[405056093]: "Delete" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:5c4957ff-42cd-447c-8a9c-b1983660c02f,client:192.168.49.2,api-group:events.k8s.io,api-version:v1,name:,subresource:,namespace:volcano-system,protocol:HTTP/2.0,resource:events,scope:namespace,url:/apis/events.k8s.io/v1/namespaces/volcano-system/events,user-agent:kube-controller-manager/v1.30.2 (linux/amd64) kubernetes/3968350/system:serviceaccount:kube-system:namespace-controller,verb:DELETE (17-Jul-2024 00:32:10.958) (total time: 3617ms):
	Trace[405056093]: ---"About to write a response" 3614ms (00:32:14.575)
	Trace[405056093]: [3.617696148s] [3.617696148s] END
	W0717 00:32:15.258691       1 cacher.go:168] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	E0717 00:32:35.304590       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0717 00:32:42.503058       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0717 00:32:44.585037       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:32:44.585236       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 00:32:44.622799       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:32:44.623176       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 00:32:44.623305       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:32:44.779025       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:32:44.779186       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 00:32:44.902022       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 00:32:44.902145       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0717 00:32:45.629315       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0717 00:32:45.902983       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0717 00:32:45.914959       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [b8f155644f3d] <==
	E0717 00:39:20.563483       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:39:21.176202       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:39:21.176336       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:39:27.350192       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:39:27.350301       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:39:28.735458       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:39:28.735571       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:39:30.663156       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:39:30.663341       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:39:40.407490       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:39:40.407648       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:39:50.280430       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:39:50.280547       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:39:55.582178       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:39:55.582211       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:39:56.219651       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:39:56.219782       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:39:58.037520       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:39:58.037845       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:39:59.779784       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:39:59.779925       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:40:02.929569       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:40:02.929670       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 00:40:04.925796       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 00:40:04.925919       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [15fc6a460ba3] <==
	I0717 00:25:01.290151       1 server_linux.go:69] "Using iptables proxy"
	I0717 00:25:01.680085       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0717 00:25:02.183558       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0717 00:25:02.183688       1 server_linux.go:165] "Using iptables Proxier"
	I0717 00:25:02.192601       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0717 00:25:02.192877       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0717 00:25:02.192961       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 00:25:02.193750       1 server.go:872] "Version info" version="v1.30.2"
	I0717 00:25:02.194379       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:25:02.196458       1 config.go:192] "Starting service config controller"
	I0717 00:25:02.196474       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 00:25:02.196498       1 config.go:101] "Starting endpoint slice config controller"
	I0717 00:25:02.196502       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 00:25:02.197307       1 config.go:319] "Starting node config controller"
	I0717 00:25:02.197315       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 00:25:02.379772       1 shared_informer.go:320] Caches are synced for node config
	I0717 00:25:02.379850       1 shared_informer.go:320] Caches are synced for service config
	I0717 00:25:02.379902       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [6ea46f430846] <==
	W0717 00:24:32.504333       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 00:24:32.504477       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 00:24:32.537863       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 00:24:32.537976       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 00:24:32.548636       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 00:24:32.548782       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 00:24:32.573466       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 00:24:32.573567       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 00:24:32.585474       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 00:24:32.585588       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 00:24:32.682834       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 00:24:32.683033       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 00:24:32.787596       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:24:32.787808       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 00:24:32.805868       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 00:24:32.806059       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 00:24:32.849653       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 00:24:32.849761       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 00:24:32.851464       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 00:24:32.851566       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 00:24:32.887984       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 00:24:32.888082       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 00:24:32.897425       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 00:24:32.897559       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 00:24:35.104529       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 00:32:45 addons-285600 kubelet[2654]: E0717 00:32:45.911981    2654 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 27233d50e2a47d07531049dce6241b5504c4b731f3b3f72adc73e2a3c75c1105" containerID="27233d50e2a47d07531049dce6241b5504c4b731f3b3f72adc73e2a3c75c1105"
	Jul 17 00:32:45 addons-285600 kubelet[2654]: I0717 00:32:45.912088    2654 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"27233d50e2a47d07531049dce6241b5504c4b731f3b3f72adc73e2a3c75c1105"} err="failed to get container status \"27233d50e2a47d07531049dce6241b5504c4b731f3b3f72adc73e2a3c75c1105\": rpc error: code = Unknown desc = Error response from daemon: No such container: 27233d50e2a47d07531049dce6241b5504c4b731f3b3f72adc73e2a3c75c1105"
	Jul 17 00:32:45 addons-285600 kubelet[2654]: I0717 00:32:45.975899    2654 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-ll9kh\" (UniqueName: \"kubernetes.io/projected/6dc2641a-8d4d-4b21-8bca-55e71e9d8056-kube-api-access-ll9kh\") on node \"addons-285600\" DevicePath \"\""
	Jul 17 00:32:46 addons-285600 kubelet[2654]: I0717 00:32:46.974200    2654 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5289f77b-eaea-4663-9358-ac94bb6ab671" path="/var/lib/kubelet/pods/5289f77b-eaea-4663-9358-ac94bb6ab671/volumes"
	Jul 17 00:32:46 addons-285600 kubelet[2654]: I0717 00:32:46.974879    2654 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dc2641a-8d4d-4b21-8bca-55e71e9d8056" path="/var/lib/kubelet/pods/6dc2641a-8d4d-4b21-8bca-55e71e9d8056/volumes"
	Jul 17 00:32:50 addons-285600 kubelet[2654]: I0717 00:32:50.109696    2654 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nf94\" (UniqueName: \"kubernetes.io/projected/98f51f66-d89b-480b-b063-837b4b1f7ab7-kube-api-access-6nf94\") pod \"98f51f66-d89b-480b-b063-837b4b1f7ab7\" (UID: \"98f51f66-d89b-480b-b063-837b4b1f7ab7\") "
	Jul 17 00:32:50 addons-285600 kubelet[2654]: I0717 00:32:50.109875    2654 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98f51f66-d89b-480b-b063-837b4b1f7ab7-config-volume\") pod \"98f51f66-d89b-480b-b063-837b4b1f7ab7\" (UID: \"98f51f66-d89b-480b-b063-837b4b1f7ab7\") "
	Jul 17 00:32:50 addons-285600 kubelet[2654]: I0717 00:32:50.110551    2654 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98f51f66-d89b-480b-b063-837b4b1f7ab7-config-volume" (OuterVolumeSpecName: "config-volume") pod "98f51f66-d89b-480b-b063-837b4b1f7ab7" (UID: "98f51f66-d89b-480b-b063-837b4b1f7ab7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Jul 17 00:32:50 addons-285600 kubelet[2654]: I0717 00:32:50.114859    2654 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98f51f66-d89b-480b-b063-837b4b1f7ab7-kube-api-access-6nf94" (OuterVolumeSpecName: "kube-api-access-6nf94") pod "98f51f66-d89b-480b-b063-837b4b1f7ab7" (UID: "98f51f66-d89b-480b-b063-837b4b1f7ab7"). InnerVolumeSpecName "kube-api-access-6nf94". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 00:32:50 addons-285600 kubelet[2654]: I0717 00:32:50.211128    2654 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-6nf94\" (UniqueName: \"kubernetes.io/projected/98f51f66-d89b-480b-b063-837b4b1f7ab7-kube-api-access-6nf94\") on node \"addons-285600\" DevicePath \"\""
	Jul 17 00:32:50 addons-285600 kubelet[2654]: I0717 00:32:50.211244    2654 reconciler_common.go:289] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98f51f66-d89b-480b-b063-837b4b1f7ab7-config-volume\") on node \"addons-285600\" DevicePath \"\""
	Jul 17 00:32:51 addons-285600 kubelet[2654]: I0717 00:32:51.017270    2654 scope.go:117] "RemoveContainer" containerID="dc5e313e09ba634bc6daed74a00996c067d9293fbed4a2f16935efc878c23aec"
	Jul 17 00:32:52 addons-285600 kubelet[2654]: I0717 00:32:52.972701    2654 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98f51f66-d89b-480b-b063-837b4b1f7ab7" path="/var/lib/kubelet/pods/98f51f66-d89b-480b-b063-837b4b1f7ab7/volumes"
	Jul 17 00:33:16 addons-285600 kubelet[2654]: I0717 00:33:16.959224    2654 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-cmvrv" secret="" err="secret \"gcp-auth\" not found"
	Jul 17 00:33:28 addons-285600 kubelet[2654]: I0717 00:33:28.957130    2654 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-zgk9n" secret="" err="secret \"gcp-auth\" not found"
	Jul 17 00:34:36 addons-285600 kubelet[2654]: I0717 00:34:36.954764    2654 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-cmvrv" secret="" err="secret \"gcp-auth\" not found"
	Jul 17 00:34:53 addons-285600 kubelet[2654]: I0717 00:34:53.950286    2654 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-zgk9n" secret="" err="secret \"gcp-auth\" not found"
	Jul 17 00:35:48 addons-285600 kubelet[2654]: I0717 00:35:48.946757    2654 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-cmvrv" secret="" err="secret \"gcp-auth\" not found"
	Jul 17 00:36:22 addons-285600 kubelet[2654]: I0717 00:36:22.942714    2654 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-zgk9n" secret="" err="secret \"gcp-auth\" not found"
	Jul 17 00:37:05 addons-285600 kubelet[2654]: I0717 00:37:05.941057    2654 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-cmvrv" secret="" err="secret \"gcp-auth\" not found"
	Jul 17 00:37:25 addons-285600 kubelet[2654]: I0717 00:37:25.938403    2654 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-zgk9n" secret="" err="secret \"gcp-auth\" not found"
	Jul 17 00:38:21 addons-285600 kubelet[2654]: I0717 00:38:21.933030    2654 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-cmvrv" secret="" err="secret \"gcp-auth\" not found"
	Jul 17 00:38:32 addons-285600 kubelet[2654]: I0717 00:38:32.933936    2654 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-zgk9n" secret="" err="secret \"gcp-auth\" not found"
	Jul 17 00:39:35 addons-285600 kubelet[2654]: I0717 00:39:35.929520    2654 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-cmvrv" secret="" err="secret \"gcp-auth\" not found"
	Jul 17 00:39:58 addons-285600 kubelet[2654]: I0717 00:39:58.926375    2654 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-zgk9n" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [ee774444733f] <==
	I0717 00:25:12.081929       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 00:25:12.180634       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 00:25:12.180799       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 00:25:12.481592       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 00:25:12.481884       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-285600_ba369b7f-836a-49f0-a3c8-c4e2761948a0!
	I0717 00:25:12.482768       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"965f584d-612a-4654-9d96-37bc5d0053d4", APIVersion:"v1", ResourceVersion:"780", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-285600_ba369b7f-836a-49f0-a3c8-c4e2761948a0 became leader
	I0717 00:25:12.582449       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-285600_ba369b7f-836a-49f0-a3c8-c4e2761948a0!
	

-- /stdout --
** stderr ** 
	W0717 00:40:11.104908   11184 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-285600 -n addons-285600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-285600 -n addons-285600: (1.3727975s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-285600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-dg8n4 ingress-nginx-admission-patch-5z4fc
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-285600 describe pod ingress-nginx-admission-create-dg8n4 ingress-nginx-admission-patch-5z4fc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-285600 describe pod ingress-nginx-admission-create-dg8n4 ingress-nginx-admission-patch-5z4fc: exit status 1 (164.6091ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-dg8n4" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-5z4fc" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-285600 describe pod ingress-nginx-admission-create-dg8n4 ingress-nginx-admission-patch-5z4fc: exit status 1
--- FAIL: TestAddons/parallel/Ingress (491.79s)

TestErrorSpam/setup (71.15s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-480800 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 --driver=docker
E0717 00:41:01.432126    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
E0717 00:41:01.447027    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
E0717 00:41:01.462327    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
E0717 00:41:01.493465    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
E0717 00:41:01.539482    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
E0717 00:41:01.632632    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
E0717 00:41:01.804314    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
E0717 00:41:02.126678    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
E0717 00:41:02.783030    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
E0717 00:41:04.068128    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
E0717 00:41:06.629200    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
E0717 00:41:11.753349    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
E0717 00:41:22.006147    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
E0717 00:41:42.495884    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-480800 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 --driver=docker: (1m11.1503093s)
error_spam_test.go:96: unexpected stderr: "W0717 00:40:59.820050   14688 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:96: unexpected stderr: "! Failing to connect to https://registry.k8s.io/ from inside the minikube container"
error_spam_test.go:96: unexpected stderr: "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
error_spam_test.go:110: minikube stdout:
* [nospam-480800] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
- KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
- MINIKUBE_LOCATION=19264
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting "nospam-480800" primary control-plane node in "nospam-480800" cluster
* Pulling base image v0.0.44-1721146479-19264 ...
* Creating docker container (CPUs=2, Memory=2250MB) ...
* Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-480800" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0717 00:40:59.820050   14688 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
! Failing to connect to https://registry.k8s.io/ from inside the minikube container
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
--- FAIL: TestErrorSpam/setup (71.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (6.99s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-965000
helpers_test.go:235: (dbg) docker inspect functional-965000:

-- stdout --
	[
	    {
	        "Id": "fecf99533631b0a6c8abdbc46f6522703e750903035bcfa4b9059af4d4c81432",
	        "Created": "2024-07-17T00:43:26.641445549Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 26889,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-17T00:43:27.246925758Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b90fcd82d9a0f97666ccbedd0bec36ffa6ae451ed5f5fff480c00361af0818c6",
	        "ResolvConfPath": "/var/lib/docker/containers/fecf99533631b0a6c8abdbc46f6522703e750903035bcfa4b9059af4d4c81432/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fecf99533631b0a6c8abdbc46f6522703e750903035bcfa4b9059af4d4c81432/hostname",
	        "HostsPath": "/var/lib/docker/containers/fecf99533631b0a6c8abdbc46f6522703e750903035bcfa4b9059af4d4c81432/hosts",
	        "LogPath": "/var/lib/docker/containers/fecf99533631b0a6c8abdbc46f6522703e750903035bcfa4b9059af4d4c81432/fecf99533631b0a6c8abdbc46f6522703e750903035bcfa4b9059af4d4c81432-json.log",
	        "Name": "/functional-965000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-965000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-965000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6b189d649ef9137afbb4c2378d0713e6cc23eaba754dbfa261528170b434f39e-init/diff:/var/lib/docker/overlay2/6088a4728183ef5756e13b25ed8f3f4eadd6ab8d4c2088bd541d2084f39281eb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6b189d649ef9137afbb4c2378d0713e6cc23eaba754dbfa261528170b434f39e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6b189d649ef9137afbb4c2378d0713e6cc23eaba754dbfa261528170b434f39e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6b189d649ef9137afbb4c2378d0713e6cc23eaba754dbfa261528170b434f39e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-965000",
	                "Source": "/var/lib/docker/volumes/functional-965000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-965000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-965000",
	                "name.minikube.sigs.k8s.io": "functional-965000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4beea52c4eb563685e758bda52727995e75206303d633a10fd6b0f6026136ca2",
	            "SandboxKey": "/var/run/docker/netns/4beea52c4eb5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63089"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63090"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63086"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63087"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63088"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-965000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "db8e34bfa81fa177cd6096b6ff175f1aa1044117ce916114f57572fd0dd52cb1",
	                    "EndpointID": "2490c2773d00cb352c967239834d794fbc36cb6df9b6021d67b46b5a3a275e89",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "functional-965000",
	                        "fecf99533631"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-965000 -n functional-965000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-965000 -n functional-965000: (1.4469703s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 logs -n 25: (2.9055526s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-480800 --log_dir                                     | nospam-480800     | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:42 UTC | 17 Jul 24 00:42 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-480800 --log_dir                                     | nospam-480800     | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:42 UTC | 17 Jul 24 00:42 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-480800 --log_dir                                     | nospam-480800     | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:42 UTC | 17 Jul 24 00:42 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-480800 --log_dir                                     | nospam-480800     | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:42 UTC | 17 Jul 24 00:42 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-480800 --log_dir                                     | nospam-480800     | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:42 UTC | 17 Jul 24 00:42 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-480800 --log_dir                                     | nospam-480800     | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:42 UTC | 17 Jul 24 00:42 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-480800 --log_dir                                     | nospam-480800     | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:42 UTC | 17 Jul 24 00:42 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-480800                                            | nospam-480800     | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:42 UTC | 17 Jul 24 00:42 UTC |
	| start   | -p functional-965000                                        | functional-965000 | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:42 UTC | 17 Jul 24 00:44 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=docker                                  |                   |                   |         |                     |                     |
	| start   | -p functional-965000                                        | functional-965000 | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:44 UTC | 17 Jul 24 00:45 UTC |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-965000 cache add                                 | functional-965000 | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:45 UTC | 17 Jul 24 00:45 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-965000 cache add                                 | functional-965000 | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:45 UTC | 17 Jul 24 00:45 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-965000 cache add                                 | functional-965000 | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:45 UTC | 17 Jul 24 00:45 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-965000 cache add                                 | functional-965000 | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:45 UTC | 17 Jul 24 00:45 UTC |
	|         | minikube-local-cache-test:functional-965000                 |                   |                   |         |                     |                     |
	| cache   | functional-965000 cache delete                              | functional-965000 | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:45 UTC | 17 Jul 24 00:45 UTC |
	|         | minikube-local-cache-test:functional-965000                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:45 UTC | 17 Jul 24 00:45 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:45 UTC | 17 Jul 24 00:45 UTC |
	| ssh     | functional-965000 ssh sudo                                  | functional-965000 | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:45 UTC | 17 Jul 24 00:45 UTC |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-965000                                           | functional-965000 | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:45 UTC | 17 Jul 24 00:45 UTC |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-965000 ssh                                       | functional-965000 | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:45 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-965000 cache reload                              | functional-965000 | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:45 UTC | 17 Jul 24 00:45 UTC |
	| ssh     | functional-965000 ssh                                       | functional-965000 | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:45 UTC | 17 Jul 24 00:45 UTC |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:45 UTC | 17 Jul 24 00:45 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:45 UTC | 17 Jul 24 00:45 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-965000 kubectl --                                | functional-965000 | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:45 UTC | 17 Jul 24 00:45 UTC |
	|         | --context functional-965000                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:44:28
	Running on machine: minikube3
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:44:28.252629     784 out.go:291] Setting OutFile to fd 776 ...
	I0717 00:44:28.253398     784 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:44:28.253398     784 out.go:304] Setting ErrFile to fd 780...
	I0717 00:44:28.253398     784 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:44:28.278033     784 out.go:298] Setting JSON to false
	I0717 00:44:28.281562     784 start.go:129] hostinfo: {"hostname":"minikube3","uptime":9083,"bootTime":1721167984,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0717 00:44:28.281701     784 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 00:44:28.286184     784 out.go:177] * [functional-965000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0717 00:44:28.290252     784 notify.go:220] Checking for updates...
	I0717 00:44:28.292878     784 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0717 00:44:28.295295     784 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 00:44:28.298096     784 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0717 00:44:28.300707     784 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 00:44:28.303035     784 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:44:28.306415     784 config.go:182] Loaded profile config "functional-965000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 00:44:28.307015     784 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:44:28.594579     784 docker.go:123] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0717 00:44:28.604825     784 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:44:28.972671     784 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:true NGoroutines:86 SystemTime:2024-07-17 00:44:28.913541279 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0717 00:44:28.976291     784 out.go:177] * Using the docker driver based on existing profile
	I0717 00:44:28.978947     784 start.go:297] selected driver: docker
	I0717 00:44:28.978998     784 start.go:901] validating driver "docker" against &{Name:functional-965000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-965000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:44:28.978998     784 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 00:44:28.993700     784 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:44:29.347959     784 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:69 OomKillDisable:true NGoroutines:86 SystemTime:2024-07-17 00:44:29.309981114 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0717 00:44:29.455068     784 cni.go:84] Creating CNI manager for ""
	I0717 00:44:29.455068     784 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 00:44:29.455068     784 start.go:340] cluster config:
	{Name:functional-965000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-965000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:44:29.460426     784 out.go:177] * Starting "functional-965000" primary control-plane node in "functional-965000" cluster
	I0717 00:44:29.463056     784 cache.go:121] Beginning downloading kic base image for docker with docker
	I0717 00:44:29.468757     784 out.go:177] * Pulling base image v0.0.44-1721146479-19264 ...
	I0717 00:44:29.471527     784 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 00:44:29.471527     784 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local docker daemon
	I0717 00:44:29.471716     784 preload.go:146] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0717 00:44:29.471716     784 cache.go:56] Caching tarball of preloaded images
	I0717 00:44:29.471716     784 preload.go:172] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 00:44:29.472350     784 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 00:44:29.472389     784 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\config.json ...
	W0717 00:44:29.704686     784 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e is of wrong architecture
	I0717 00:44:29.704686     784 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e to local cache
	I0717 00:44:29.704788     784 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721146479-19264@sha256_7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar
	I0717 00:44:29.705050     784 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721146479-19264@sha256_7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar
	I0717 00:44:29.705148     784 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local cache directory
	I0717 00:44:29.705320     784 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local cache directory, skipping pull
	I0717 00:44:29.705382     784 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e exists in cache, skipping pull
	I0717 00:44:29.705672     784 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e as a tarball
	I0717 00:44:29.705761     784 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e from local cache
	I0717 00:44:29.705792     784 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721146479-19264@sha256_7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar
	I0717 00:44:29.722493     784 image.go:273] response: 
	I0717 00:44:30.235775     784 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e from cached tarball
	I0717 00:44:30.235775     784 cache.go:194] Successfully downloaded all kic artifacts
	I0717 00:44:30.235775     784 start.go:360] acquireMachinesLock for functional-965000: {Name:mk9a9bc98bcdd44d4a0c34b7902ae925f7fed4a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 00:44:30.235775     784 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-965000"
	I0717 00:44:30.236978     784 start.go:96] Skipping create...Using existing machine configuration
	I0717 00:44:30.236978     784 fix.go:54] fixHost starting: 
	I0717 00:44:30.257416     784 cli_runner.go:164] Run: docker container inspect functional-965000 --format={{.State.Status}}
	I0717 00:44:30.426213     784 fix.go:112] recreateIfNeeded on functional-965000: state=Running err=<nil>
	W0717 00:44:30.426213     784 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 00:44:30.429785     784 out.go:177] * Updating the running docker "functional-965000" container ...
	I0717 00:44:30.434009     784 machine.go:94] provisionDockerMachine start ...
	I0717 00:44:30.446084     784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-965000
	I0717 00:44:30.647235     784 main.go:141] libmachine: Using SSH client type: native
	I0717 00:44:30.647946     784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x125a9e0] 0x125d5c0 <nil>  [] 0s} 127.0.0.1 63089 <nil> <nil>}
	I0717 00:44:30.647946     784 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 00:44:30.847268     784 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-965000
	
	I0717 00:44:30.847374     784 ubuntu.go:169] provisioning hostname "functional-965000"
	I0717 00:44:30.858354     784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-965000
	I0717 00:44:31.050528     784 main.go:141] libmachine: Using SSH client type: native
	I0717 00:44:31.050868     784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x125a9e0] 0x125d5c0 <nil>  [] 0s} 127.0.0.1 63089 <nil> <nil>}
	I0717 00:44:31.050868     784 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-965000 && echo "functional-965000" | sudo tee /etc/hostname
	I0717 00:44:31.279288     784 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-965000
	
	I0717 00:44:31.292416     784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-965000
	I0717 00:44:31.483846     784 main.go:141] libmachine: Using SSH client type: native
	I0717 00:44:31.484568     784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x125a9e0] 0x125d5c0 <nil>  [] 0s} 127.0.0.1 63089 <nil> <nil>}
	I0717 00:44:31.484568     784 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-965000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-965000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-965000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 00:44:31.683268     784 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:44:31.683268     784 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0717 00:44:31.683268     784 ubuntu.go:177] setting up certificates
	I0717 00:44:31.683268     784 provision.go:84] configureAuth start
	I0717 00:44:31.698455     784 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-965000
	I0717 00:44:31.894305     784 provision.go:143] copyHostCerts
	I0717 00:44:31.894305     784 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0717 00:44:31.895140     784 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0717 00:44:31.895221     784 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0717 00:44:31.896141     784 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0717 00:44:31.897411     784 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0717 00:44:31.898015     784 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0717 00:44:31.898015     784 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0717 00:44:31.898015     784 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0717 00:44:31.899481     784 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0717 00:44:31.899481     784 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0717 00:44:31.899481     784 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0717 00:44:31.900328     784 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0717 00:44:31.901136     784 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-965000 san=[127.0.0.1 192.168.49.2 functional-965000 localhost minikube]
	I0717 00:44:32.163016     784 provision.go:177] copyRemoteCerts
	I0717 00:44:32.176660     784 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 00:44:32.185262     784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-965000
	I0717 00:44:32.372288     784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63089 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-965000\id_rsa Username:docker}
	I0717 00:44:32.504060     784 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0717 00:44:32.504060     784 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0717 00:44:32.552347     784 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0717 00:44:32.553049     784 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 00:44:32.597554     784 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0717 00:44:32.603786     784 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 00:44:32.646521     784 provision.go:87] duration metric: took 962.4948ms to configureAuth
	I0717 00:44:32.648568     784 ubuntu.go:193] setting minikube options for container-runtime
	I0717 00:44:32.648947     784 config.go:182] Loaded profile config "functional-965000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 00:44:32.659852     784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-965000
	I0717 00:44:32.860920     784 main.go:141] libmachine: Using SSH client type: native
	I0717 00:44:32.861597     784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x125a9e0] 0x125d5c0 <nil>  [] 0s} 127.0.0.1 63089 <nil> <nil>}
	I0717 00:44:32.861597     784 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 00:44:33.059177     784 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0717 00:44:33.059177     784 ubuntu.go:71] root file system type: overlay
	I0717 00:44:33.059177     784 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 00:44:33.070015     784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-965000
	I0717 00:44:33.262115     784 main.go:141] libmachine: Using SSH client type: native
	I0717 00:44:33.262146     784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x125a9e0] 0x125d5c0 <nil>  [] 0s} 127.0.0.1 63089 <nil> <nil>}
	I0717 00:44:33.262146     784 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 00:44:33.478041     784 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 00:44:33.489558     784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-965000
	I0717 00:44:33.683811     784 main.go:141] libmachine: Using SSH client type: native
	I0717 00:44:33.684667     784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x125a9e0] 0x125d5c0 <nil>  [] 0s} 127.0.0.1 63089 <nil> <nil>}
	I0717 00:44:33.684667     784 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 00:44:33.887507     784 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 00:44:33.887593     784 machine.go:97] duration metric: took 3.4535131s to provisionDockerMachine
	I0717 00:44:33.887632     784 start.go:293] postStartSetup for "functional-965000" (driver="docker")
	I0717 00:44:33.887632     784 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 00:44:33.901286     784 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 00:44:33.911733     784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-965000
	I0717 00:44:34.105777     784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63089 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-965000\id_rsa Username:docker}
	I0717 00:44:34.252789     784 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 00:44:34.264769     784 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0717 00:44:34.264769     784 command_runner.go:130] > NAME="Ubuntu"
	I0717 00:44:34.264769     784 command_runner.go:130] > VERSION_ID="22.04"
	I0717 00:44:34.264769     784 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0717 00:44:34.264769     784 command_runner.go:130] > VERSION_CODENAME=jammy
	I0717 00:44:34.264769     784 command_runner.go:130] > ID=ubuntu
	I0717 00:44:34.264769     784 command_runner.go:130] > ID_LIKE=debian
	I0717 00:44:34.264769     784 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0717 00:44:34.264769     784 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0717 00:44:34.264769     784 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0717 00:44:34.264769     784 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0717 00:44:34.264769     784 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0717 00:44:34.264769     784 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 00:44:34.264769     784 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 00:44:34.264769     784 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 00:44:34.264769     784 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0717 00:44:34.264769     784 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0717 00:44:34.266182     784 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0717 00:44:34.267567     784 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77122.pem -> 77122.pem in /etc/ssl/certs
	I0717 00:44:34.267567     784 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77122.pem -> /etc/ssl/certs/77122.pem
	I0717 00:44:34.268700     784 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\test\nested\copy\7712\hosts -> hosts in /etc/test/nested/copy/7712
	I0717 00:44:34.268749     784 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\test\nested\copy\7712\hosts -> /etc/test/nested/copy/7712/hosts
	I0717 00:44:34.279387     784 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/7712
	I0717 00:44:34.296721     784 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77122.pem --> /etc/ssl/certs/77122.pem (1708 bytes)
	I0717 00:44:34.349311     784 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\test\nested\copy\7712\hosts --> /etc/test/nested/copy/7712/hosts (40 bytes)
	I0717 00:44:34.396377     784 start.go:296] duration metric: took 508.7411ms for postStartSetup
	I0717 00:44:34.409343     784 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 00:44:34.412313     784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-965000
	I0717 00:44:34.608714     784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63089 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-965000\id_rsa Username:docker}
	I0717 00:44:34.738492     784 command_runner.go:130] > 1%!
	(MISSING)I0717 00:44:34.753813     784 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 00:44:34.766845     784 command_runner.go:130] > 951G
	I0717 00:44:34.766845     784 fix.go:56] duration metric: took 4.5298302s for fixHost
	I0717 00:44:34.767388     784 start.go:83] releasing machines lock for "functional-965000", held for 4.5305453s
	I0717 00:44:34.778065     784 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-965000
	I0717 00:44:34.973961     784 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0717 00:44:34.983683     784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-965000
	I0717 00:44:34.986731     784 ssh_runner.go:195] Run: cat /version.json
	I0717 00:44:34.997310     784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-965000
	I0717 00:44:35.194871     784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63089 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-965000\id_rsa Username:docker}
	I0717 00:44:35.212295     784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63089 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-965000\id_rsa Username:docker}
	I0717 00:44:35.325637     784 command_runner.go:130] > {"iso_version": "v1.33.1-1721037971-19249", "kicbase_version": "v0.0.44-1721146479-19264", "minikube_version": "v1.33.1", "commit": "6e0d7ef26437c947028f356d4449a323918e966e"}
	I0717 00:44:35.338351     784 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W0717 00:44:35.338351     784 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0717 00:44:35.341142     784 ssh_runner.go:195] Run: systemctl --version
	I0717 00:44:35.355757     784 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I0717 00:44:35.355757     784 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0717 00:44:35.371377     784 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 00:44:35.383895     784 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0717 00:44:35.384932     784 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0717 00:44:35.384932     784 command_runner.go:130] > Device: 91h/145d	Inode: 275         Links: 1
	I0717 00:44:35.385028     784 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 00:44:35.385028     784 command_runner.go:130] > Access: 2024-07-17 00:22:21.502436501 +0000
	I0717 00:44:35.385131     784 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0717 00:44:35.385165     784 command_runner.go:130] > Change: 2024-07-17 00:21:48.284048209 +0000
	I0717 00:44:35.385165     784 command_runner.go:130] >  Birth: 2024-07-17 00:21:48.284048209 +0000
	I0717 00:44:35.400030     784 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0717 00:44:35.419140     784 command_runner.go:130] ! find: '\\etc\\cni\\net.d': No such file or directory
	W0717 00:44:35.425348     784 start.go:439] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	W0717 00:44:35.429324     784 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W0717 00:44:35.429379     784 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0717 00:44:35.451422     784 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 00:44:35.472297     784 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 00:44:35.472297     784 start.go:495] detecting cgroup driver to use...
	I0717 00:44:35.472297     784 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0717 00:44:35.472297     784 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 00:44:35.505338     784 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0717 00:44:35.519510     784 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 00:44:35.556643     784 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 00:44:35.580962     784 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 00:44:35.593503     784 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 00:44:35.635374     784 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 00:44:35.672888     784 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 00:44:35.711631     784 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 00:44:35.750856     784 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 00:44:35.791668     784 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 00:44:35.831084     784 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 00:44:35.869403     784 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 00:44:35.909909     784 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 00:44:35.931608     784 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0717 00:44:35.950380     784 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 00:44:35.991451     784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:44:36.177647     784 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 00:44:49.181723     784 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (13.0028752s)
	I0717 00:44:49.181723     784 start.go:495] detecting cgroup driver to use...
	I0717 00:44:49.181723     784 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0717 00:44:49.207468     784 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 00:44:49.240964     784 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0717 00:44:49.241016     784 command_runner.go:130] > [Unit]
	I0717 00:44:49.241065     784 command_runner.go:130] > Description=Docker Application Container Engine
	I0717 00:44:49.241065     784 command_runner.go:130] > Documentation=https://docs.docker.com
	I0717 00:44:49.241148     784 command_runner.go:130] > BindsTo=containerd.service
	I0717 00:44:49.241189     784 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0717 00:44:49.241215     784 command_runner.go:130] > Wants=network-online.target
	I0717 00:44:49.241215     784 command_runner.go:130] > Requires=docker.socket
	I0717 00:44:49.241266     784 command_runner.go:130] > StartLimitBurst=3
	I0717 00:44:49.241266     784 command_runner.go:130] > StartLimitIntervalSec=60
	I0717 00:44:49.241266     784 command_runner.go:130] > [Service]
	I0717 00:44:49.241342     784 command_runner.go:130] > Type=notify
	I0717 00:44:49.241342     784 command_runner.go:130] > Restart=on-failure
	I0717 00:44:49.241342     784 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0717 00:44:49.241461     784 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0717 00:44:49.241489     784 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0717 00:44:49.241541     784 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0717 00:44:49.241572     784 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0717 00:44:49.241572     784 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0717 00:44:49.241609     784 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0717 00:44:49.241609     784 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0717 00:44:49.241668     784 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0717 00:44:49.241668     784 command_runner.go:130] > ExecStart=
	I0717 00:44:49.241724     784 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0717 00:44:49.241771     784 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0717 00:44:49.241771     784 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0717 00:44:49.241771     784 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0717 00:44:49.241829     784 command_runner.go:130] > LimitNOFILE=infinity
	I0717 00:44:49.241829     784 command_runner.go:130] > LimitNPROC=infinity
	I0717 00:44:49.241829     784 command_runner.go:130] > LimitCORE=infinity
	I0717 00:44:49.241829     784 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0717 00:44:49.241896     784 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0717 00:44:49.241896     784 command_runner.go:130] > TasksMax=infinity
	I0717 00:44:49.241896     784 command_runner.go:130] > TimeoutStartSec=0
	I0717 00:44:49.241896     784 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0717 00:44:49.241896     784 command_runner.go:130] > Delegate=yes
	I0717 00:44:49.241896     784 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0717 00:44:49.241896     784 command_runner.go:130] > KillMode=process
	I0717 00:44:49.241896     784 command_runner.go:130] > [Install]
	I0717 00:44:49.241896     784 command_runner.go:130] > WantedBy=multi-user.target
	I0717 00:44:49.241896     784 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0717 00:44:49.260146     784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 00:44:49.291048     784 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 00:44:49.332369     784 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0717 00:44:49.351139     784 ssh_runner.go:195] Run: which cri-dockerd
	I0717 00:44:49.373891     784 command_runner.go:130] > /usr/bin/cri-dockerd
	I0717 00:44:49.392868     784 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 00:44:49.419063     784 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 00:44:49.486657     784 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 00:44:49.682272     784 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 00:44:49.904832     784 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 00:44:49.905033     784 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 00:44:49.967325     784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:44:50.151587     784 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 00:44:51.061985     784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0717 00:44:51.108169     784 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0717 00:44:51.160707     784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 00:44:51.207663     784 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 00:44:51.375327     784 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 00:44:51.543056     784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:44:51.717805     784 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 00:44:51.765578     784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0717 00:44:51.816403     784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:44:52.064907     784 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0717 00:44:52.240448     784 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 00:44:52.254461     784 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 00:44:52.269547     784 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0717 00:44:52.269547     784 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0717 00:44:52.269547     784 command_runner.go:130] > Device: 9ah/154d	Inode: 718         Links: 1
	I0717 00:44:52.269547     784 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0717 00:44:52.269547     784 command_runner.go:130] > Access: 2024-07-17 00:44:52.216880638 +0000
	I0717 00:44:52.269547     784 command_runner.go:130] > Modify: 2024-07-17 00:44:52.086869750 +0000
	I0717 00:44:52.269547     784 command_runner.go:130] > Change: 2024-07-17 00:44:52.086869750 +0000
	I0717 00:44:52.269547     784 command_runner.go:130] >  Birth: -
	I0717 00:44:52.269547     784 start.go:563] Will wait 60s for crictl version
	I0717 00:44:52.284249     784 ssh_runner.go:195] Run: which crictl
	I0717 00:44:52.304665     784 command_runner.go:130] > /usr/bin/crictl
	I0717 00:44:52.320432     784 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 00:44:52.420918     784 command_runner.go:130] > Version:  0.1.0
	I0717 00:44:52.420918     784 command_runner.go:130] > RuntimeName:  docker
	I0717 00:44:52.420918     784 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0717 00:44:52.420918     784 command_runner.go:130] > RuntimeApiVersion:  v1
	I0717 00:44:52.433172     784 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0717 00:44:52.442754     784 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 00:44:52.505103     784 command_runner.go:130] > 27.0.3
	I0717 00:44:52.515971     784 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 00:44:52.581241     784 command_runner.go:130] > 27.0.3
	I0717 00:44:52.585004     784 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0717 00:44:52.595619     784 cli_runner.go:164] Run: docker exec -t functional-965000 dig +short host.docker.internal
	I0717 00:44:52.862471     784 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0717 00:44:52.874182     784 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0717 00:44:52.892253     784 command_runner.go:130] > 192.168.65.254	host.minikube.internal
	I0717 00:44:52.903354     784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-965000
	I0717 00:44:53.091298     784 kubeadm.go:883] updating cluster {Name:functional-965000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-965000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 00:44:53.091993     784 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 00:44:53.102049     784 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 00:44:53.153608     784 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.2
	I0717 00:44:53.153608     784 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.2
	I0717 00:44:53.153608     784 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.2
	I0717 00:44:53.153608     784 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.2
	I0717 00:44:53.153608     784 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0717 00:44:53.153608     784 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0717 00:44:53.153608     784 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0717 00:44:53.153608     784 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 00:44:53.153608     784 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0717 00:44:53.153608     784 docker.go:615] Images already preloaded, skipping extraction
	I0717 00:44:53.163805     784 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 00:44:53.210684     784 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.2
	I0717 00:44:53.210684     784 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.2
	I0717 00:44:53.210684     784 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.2
	I0717 00:44:53.210684     784 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.2
	I0717 00:44:53.210684     784 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0717 00:44:53.210684     784 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0717 00:44:53.210684     784 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0717 00:44:53.210684     784 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 00:44:53.210684     784 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0717 00:44:53.210684     784 cache_images.go:84] Images are preloaded, skipping loading
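	(Editor's note: the `cache_images` decision above reduces to a set comparison between the images `docker images` reports and the images the target Kubernetes version needs. A minimal sketch of that logic, assuming an illustrative `required` list — this is not minikube's actual code:)

```python
# Sketch of the "Images are preloaded, skipping loading" check:
# every required image must already appear in docker's image list.

def images_preloaded(docker_images, required):
    """Return True when every required image tag is already present."""
    return set(required).issubset(set(docker_images))

reported = [
    "registry.k8s.io/kube-apiserver:v1.30.2",
    "registry.k8s.io/kube-controller-manager:v1.30.2",
    "registry.k8s.io/kube-scheduler:v1.30.2",
    "registry.k8s.io/kube-proxy:v1.30.2",
    "registry.k8s.io/etcd:3.5.12-0",
    "registry.k8s.io/coredns/coredns:v1.11.1",
    "registry.k8s.io/pause:3.9",
    "gcr.io/k8s-minikube/storage-provisioner:v5",
]
# Hypothetical subset of required images for v1.30.2:
required = ["registry.k8s.io/kube-apiserver:v1.30.2", "registry.k8s.io/etcd:3.5.12-0"]

print(images_preloaded(reported, required))  # True -> extraction can be skipped
```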
	I0717 00:44:53.210684     784 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.30.2 docker true true} ...
	I0717 00:44:53.211426     784 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-965000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:functional-965000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 00:44:53.220978     784 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 00:44:53.328379     784 command_runner.go:130] > cgroupfs
	I0717 00:44:53.328908     784 cni.go:84] Creating CNI manager for ""
	I0717 00:44:53.328908     784 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 00:44:53.329009     784 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 00:44:53.329082     784 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-965000 NodeName:functional-965000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 00:44:53.329268     784 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-965000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 00:44:53.343707     784 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0717 00:44:53.365366     784 command_runner.go:130] > kubeadm
	I0717 00:44:53.365366     784 command_runner.go:130] > kubectl
	I0717 00:44:53.365366     784 command_runner.go:130] > kubelet
	I0717 00:44:53.365366     784 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 00:44:53.376911     784 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 00:44:53.397219     784 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0717 00:44:53.434865     784 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 00:44:53.465276     784 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0717 00:44:53.509514     784 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0717 00:44:53.522158     784 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I0717 00:44:53.534434     784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:44:53.687208     784 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:44:53.712120     784 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000 for IP: 192.168.49.2
	I0717 00:44:53.712269     784 certs.go:194] generating shared ca certs ...
	I0717 00:44:53.712269     784 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:44:53.713065     784 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0717 00:44:53.713564     784 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0717 00:44:53.713787     784 certs.go:256] generating profile certs ...
	I0717 00:44:53.714507     784 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.key
	I0717 00:44:53.715552     784 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\apiserver.key.50d891a6
	I0717 00:44:53.715943     784 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\proxy-client.key
	I0717 00:44:53.715943     784 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 00:44:53.716104     784 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0717 00:44:53.716331     784 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 00:44:53.716331     784 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 00:44:53.716331     784 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 00:44:53.716331     784 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 00:44:53.716907     784 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 00:44:53.717083     784 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 00:44:53.717083     784 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7712.pem (1338 bytes)
	W0717 00:44:53.717805     784 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7712_empty.pem, impossibly tiny 0 bytes
	I0717 00:44:53.718022     784 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0717 00:44:53.718049     784 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0717 00:44:53.718588     784 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0717 00:44:53.718908     784 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0717 00:44:53.719011     784 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77122.pem (1708 bytes)
	I0717 00:44:53.719011     784 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7712.pem -> /usr/share/ca-certificates/7712.pem
	I0717 00:44:53.719760     784 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77122.pem -> /usr/share/ca-certificates/77122.pem
	I0717 00:44:53.719941     784 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:44:53.721001     784 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 00:44:53.763424     784 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 00:44:53.806355     784 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 00:44:53.849897     784 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 00:44:53.897479     784 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0717 00:44:53.942872     784 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 00:44:53.981671     784 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 00:44:54.031326     784 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 00:44:54.077537     784 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7712.pem --> /usr/share/ca-certificates/7712.pem (1338 bytes)
	I0717 00:44:54.120072     784 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77122.pem --> /usr/share/ca-certificates/77122.pem (1708 bytes)
	I0717 00:44:54.165293     784 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 00:44:54.207778     784 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 00:44:54.254283     784 ssh_runner.go:195] Run: openssl version
	I0717 00:44:54.270132     784 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0717 00:44:54.282418     784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7712.pem && ln -fs /usr/share/ca-certificates/7712.pem /etc/ssl/certs/7712.pem"
	I0717 00:44:54.315583     784 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7712.pem
	I0717 00:44:54.327142     784 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 00:42 /usr/share/ca-certificates/7712.pem
	I0717 00:44:54.327142     784 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:42 /usr/share/ca-certificates/7712.pem
	I0717 00:44:54.341682     784 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7712.pem
	I0717 00:44:54.355262     784 command_runner.go:130] > 51391683
	I0717 00:44:54.368008     784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7712.pem /etc/ssl/certs/51391683.0"
	I0717 00:44:54.400759     784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77122.pem && ln -fs /usr/share/ca-certificates/77122.pem /etc/ssl/certs/77122.pem"
	I0717 00:44:54.435289     784 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77122.pem
	I0717 00:44:54.452236     784 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 00:42 /usr/share/ca-certificates/77122.pem
	I0717 00:44:54.453661     784 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:42 /usr/share/ca-certificates/77122.pem
	I0717 00:44:54.467233     784 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77122.pem
	I0717 00:44:54.482352     784 command_runner.go:130] > 3ec20f2e
	I0717 00:44:54.494260     784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77122.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 00:44:54.527177     784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 00:44:54.562255     784 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:44:54.575010     784 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:44:54.575083     784 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:44:54.586496     784 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 00:44:54.600752     784 command_runner.go:130] > b5213941
	I0717 00:44:54.612955     784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 00:44:54.646608     784 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 00:44:54.661425     784 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 00:44:54.661454     784 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0717 00:44:54.661454     784 command_runner.go:130] > Device: 830h/2096d	Inode: 19306       Links: 1
	I0717 00:44:54.661454     784 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 00:44:54.661454     784 command_runner.go:130] > Access: 2024-07-17 00:43:45.314191793 +0000
	I0717 00:44:54.661454     784 command_runner.go:130] > Modify: 2024-07-17 00:43:45.314191793 +0000
	I0717 00:44:54.661454     784 command_runner.go:130] > Change: 2024-07-17 00:43:45.314191793 +0000
	I0717 00:44:54.661454     784 command_runner.go:130] >  Birth: 2024-07-17 00:43:45.314191793 +0000
	I0717 00:44:54.673488     784 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 00:44:54.687572     784 command_runner.go:130] > Certificate will not expire
	I0717 00:44:54.699552     784 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 00:44:54.713460     784 command_runner.go:130] > Certificate will not expire
	I0717 00:44:54.725377     784 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 00:44:54.743596     784 command_runner.go:130] > Certificate will not expire
	I0717 00:44:54.755322     784 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 00:44:54.768295     784 command_runner.go:130] > Certificate will not expire
	I0717 00:44:54.781744     784 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 00:44:54.794673     784 command_runner.go:130] > Certificate will not expire
	I0717 00:44:54.809433     784 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 00:44:54.825232     784 command_runner.go:130] > Certificate will not expire
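	(Editor's note: each `openssl x509 -checkend 86400` invocation above asks one question: will this certificate still be valid 86400 seconds, i.e. 24 hours, from now? Exit status 0 is what the log reports as "Certificate will not expire". The same comparison can be sketched in Python — `checkend` is a hypothetical helper, not minikube code:)

```python
from datetime import datetime, timedelta, timezone

def checkend(not_after: datetime, window_seconds: int, now: datetime) -> bool:
    """Mimic `openssl x509 -checkend N`: True means the certificate will
    NOT have expired N seconds from `now` (openssl exit status 0)."""
    return not_after > now + timedelta(seconds=window_seconds)

now = datetime(2024, 7, 17, 0, 44, 54, tzinfo=timezone.utc)
not_after = datetime(2027, 7, 17, tzinfo=timezone.utc)  # illustrative expiry date

print(checkend(not_after, 86400, now))  # True -> "Certificate will not expire"
```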
	I0717 00:44:54.825877     784 kubeadm.go:392] StartCluster: {Name:functional-965000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-965000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:44:54.835499     784 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 00:44:54.892068     784 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 00:44:54.914119     784 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0717 00:44:54.914119     784 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0717 00:44:54.914119     784 command_runner.go:130] > /var/lib/minikube/etcd:
	I0717 00:44:54.914119     784 command_runner.go:130] > member
	I0717 00:44:54.914119     784 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 00:44:54.914119     784 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 00:44:54.925644     784 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 00:44:54.955332     784 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 00:44:54.965948     784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-965000
	I0717 00:44:55.165523     784 kubeconfig.go:125] found "functional-965000" server: "https://127.0.0.1:63088"
	I0717 00:44:55.167021     784 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0717 00:44:55.167823     784 kapi.go:59] client config for functional-965000: &rest.Config{Host:"https://127.0.0.1:63088", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-965000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-965000\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27055a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 00:44:55.169285     784 cert_rotation.go:137] Starting client certificate rotation controller
	I0717 00:44:55.182530     784 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 00:44:55.204798     784 kubeadm.go:630] The running cluster does not require reconfiguration: 127.0.0.1
	I0717 00:44:55.204798     784 kubeadm.go:597] duration metric: took 290.6765ms to restartPrimaryControlPlane
	I0717 00:44:55.204798     784 kubeadm.go:394] duration metric: took 378.9178ms to StartCluster
	I0717 00:44:55.204798     784 settings.go:142] acquiring lock: {Name:mke99fb8c09012609ce6804e7dfd4d68f5541df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:44:55.204798     784 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0717 00:44:55.206209     784 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:44:55.207679     784 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 00:44:55.207679     784 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 00:44:55.207948     784 addons.go:69] Setting default-storageclass=true in profile "functional-965000"
	I0717 00:44:55.207903     784 addons.go:69] Setting storage-provisioner=true in profile "functional-965000"
	I0717 00:44:55.208077     784 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-965000"
	I0717 00:44:55.208191     784 addons.go:234] Setting addon storage-provisioner=true in "functional-965000"
	W0717 00:44:55.208553     784 addons.go:243] addon storage-provisioner should already be in state true
	I0717 00:44:55.208620     784 config.go:182] Loaded profile config "functional-965000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 00:44:55.208679     784 host.go:66] Checking if "functional-965000" exists ...
	I0717 00:44:55.215989     784 out.go:177] * Verifying Kubernetes components...
	I0717 00:44:55.231197     784 cli_runner.go:164] Run: docker container inspect functional-965000 --format={{.State.Status}}
	I0717 00:44:55.231197     784 cli_runner.go:164] Run: docker container inspect functional-965000 --format={{.State.Status}}
	I0717 00:44:55.233856     784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 00:44:55.434895     784 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0717 00:44:55.435419     784 kapi.go:59] client config for functional-965000: &rest.Config{Host:"https://127.0.0.1:63088", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-965000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-965000\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27055a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 00:44:55.436508     784 addons.go:234] Setting addon default-storageclass=true in "functional-965000"
	W0717 00:44:55.436508     784 addons.go:243] addon default-storageclass should already be in state true
	I0717 00:44:55.436508     784 host.go:66] Checking if "functional-965000" exists ...
	I0717 00:44:55.446245     784 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 00:44:55.453302     784 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 00:44:55.459472     784 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 00:44:55.459472     784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 00:44:55.468919     784 cli_runner.go:164] Run: docker container inspect functional-965000 --format={{.State.Status}}
	I0717 00:44:55.470958     784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-965000
	I0717 00:44:55.498329     784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-965000
	I0717 00:44:55.683134     784 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 00:44:55.683134     784 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 00:44:55.699321     784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63089 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-965000\id_rsa Username:docker}
	I0717 00:44:55.700076     784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-965000
	I0717 00:44:55.700302     784 node_ready.go:35] waiting up to 6m0s for node "functional-965000" to be "Ready" ...
	I0717 00:44:55.700648     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:44:55.700648     784 round_trippers.go:469] Request Headers:
	I0717 00:44:55.700648     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:44:55.700773     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:44:55.703487     784 round_trippers.go:574] Response Status:  in 2 milliseconds
	I0717 00:44:55.703487     784 round_trippers.go:577] Response Headers:
	I0717 00:44:55.888961     784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63089 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-965000\id_rsa Username:docker}
	I0717 00:44:55.899617     784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 00:44:56.021777     784 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0717 00:44:56.028037     784 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0717 00:44:56.028037     784 retry.go:31] will retry after 145.703033ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0717 00:44:56.081200     784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 00:44:56.185161     784 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0717 00:44:56.190640     784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0717 00:44:56.192046     784 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0717 00:44:56.192046     784 retry.go:31] will retry after 231.971486ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0717 00:44:56.288468     784 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0717 00:44:56.292240     784 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0717 00:44:56.292240     784 retry.go:31] will retry after 529.405079ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0717 00:44:56.441563     784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0717 00:44:56.539945     784 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0717 00:44:56.547123     784 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0717 00:44:56.547180     784 retry.go:31] will retry after 375.973414ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0717 00:44:56.715880     784 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:44:56.716073     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:44:56.716158     784 round_trippers.go:469] Request Headers:
	I0717 00:44:56.716232     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:44:56.716232     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:44:56.719954     784 round_trippers.go:574] Response Status:  in 3 milliseconds
	I0717 00:44:56.720119     784 round_trippers.go:577] Response Headers:
	I0717 00:44:56.838672     784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 00:44:56.930918     784 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0717 00:44:56.936293     784 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0717 00:44:56.936522     784 retry.go:31] will retry after 567.788474ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0717 00:44:56.943944     784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0717 00:44:57.048772     784 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0717 00:44:57.055181     784 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0717 00:44:57.055181     784 retry.go:31] will retry after 774.367413ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0717 00:44:57.525486     784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 00:44:57.621484     784 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0717 00:44:57.627925     784 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0717 00:44:57.627988     784 retry.go:31] will retry after 902.309519ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0717 00:44:57.735783     784 with_retry.go:234] Got a Retry-After 1s response for attempt 2 to https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:44:57.735860     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:44:57.735860     784 round_trippers.go:469] Request Headers:
	I0717 00:44:57.735860     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:44:57.735860     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:44:57.738695     784 round_trippers.go:574] Response Status:  in 2 milliseconds
	I0717 00:44:57.738695     784 round_trippers.go:577] Response Headers:
	I0717 00:44:57.846234     784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0717 00:44:57.952493     784 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0717 00:44:57.952582     784 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0717 00:44:57.952712     784 retry.go:31] will retry after 945.758503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0717 00:44:58.572356     784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 00:44:58.750490     784 with_retry.go:234] Got a Retry-After 1s response for attempt 3 to https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:44:58.750490     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:44:58.750490     784 round_trippers.go:469] Request Headers:
	I0717 00:44:58.750490     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:44:58.750490     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:44:58.754505     784 round_trippers.go:574] Response Status:  in 4 milliseconds
	I0717 00:44:58.754505     784 round_trippers.go:577] Response Headers:
	I0717 00:44:58.925937     784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0717 00:44:59.198157     784 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0717 00:44:59.206131     784 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0717 00:44:59.206856     784 retry.go:31] will retry after 1.359401187s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0717 00:44:59.615404     784 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0717 00:44:59.621656     784 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0717 00:44:59.621656     784 retry.go:31] will retry after 1.440513354s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0717 00:44:59.762565     784 with_retry.go:234] Got a Retry-After 1s response for attempt 4 to https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:44:59.762565     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:44:59.762777     784 round_trippers.go:469] Request Headers:
	I0717 00:44:59.762819     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:44:59.762819     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:44:59.765976     784 round_trippers.go:574] Response Status:  in 3 milliseconds
	I0717 00:44:59.765976     784 round_trippers.go:577] Response Headers:
	I0717 00:45:00.587996     784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 00:45:00.781044     784 with_retry.go:234] Got a Retry-After 1s response for attempt 5 to https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:00.781473     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:00.781473     784 round_trippers.go:469] Request Headers:
	I0717 00:45:00.781473     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:00.781473     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:00.786483     784 round_trippers.go:574] Response Status:  in 5 milliseconds
	I0717 00:45:00.786483     784 round_trippers.go:577] Response Headers:
	I0717 00:45:01.080830     784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0717 00:45:01.115997     784 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0717 00:45:01.119023     784 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0717 00:45:01.119023     784 retry.go:31] will retry after 1.525298222s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0717 00:45:01.789581     784 with_retry.go:234] Got a Retry-After 1s response for attempt 6 to https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:01.789767     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:01.789767     784 round_trippers.go:469] Request Headers:
	I0717 00:45:01.789767     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:01.789767     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:02.666534     784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 00:45:05.402995     784 round_trippers.go:574] Response Status: 200 OK in 3613 milliseconds
	I0717 00:45:05.402995     784 round_trippers.go:577] Response Headers:
	I0717 00:45:05.402995     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0717 00:45:05.402995     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0717 00:45:05.402995     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:05 GMT
	I0717 00:45:05.402995     784 round_trippers.go:580]     Audit-Id: bc304d17-3c22-414f-bc91-2148da740fec
	I0717 00:45:05.402995     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:05.402995     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:05.404158     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:05.405400     784 node_ready.go:49] node "functional-965000" has status "Ready":"True"
	I0717 00:45:05.405458     784 node_ready.go:38] duration metric: took 9.7050328s for node "functional-965000" to be "Ready" ...
	I0717 00:45:05.405522     784 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 00:45:05.405756     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods
	I0717 00:45:05.405819     784 round_trippers.go:469] Request Headers:
	I0717 00:45:05.405819     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:05.405819     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:05.507616     784 round_trippers.go:574] Response Status: 200 OK in 101 milliseconds
	I0717 00:45:05.507647     784 round_trippers.go:577] Response Headers:
	I0717 00:45:05.507647     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0717 00:45:05.507745     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:05 GMT
	I0717 00:45:05.507745     784 round_trippers.go:580]     Audit-Id: 621553a6-8d4c-4d89-b1bb-3b58616c83b9
	I0717 00:45:05.507745     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:05.507745     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:05.507819     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0717 00:45:05.508923     784 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"416"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-wz2jh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"68da0227-f0f9-4feb-a17c-6282b313b353","resourceVersion":"391","creationTimestamp":"2024-07-17T00:44:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b84c9d6-b98a-4a91-b9d9-06ef869db86f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:44:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b84c9d6-b98a-4a91-b9d9-06ef869db86f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 50306 chars]
	I0717 00:45:05.516662     784 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wz2jh" in "kube-system" namespace to be "Ready" ...
	I0717 00:45:05.516949     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-wz2jh
	I0717 00:45:05.517074     784 round_trippers.go:469] Request Headers:
	I0717 00:45:05.517074     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:05.517074     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:05.716997     784 round_trippers.go:574] Response Status: 200 OK in 199 milliseconds
	I0717 00:45:05.716997     784 round_trippers.go:577] Response Headers:
	I0717 00:45:05.716997     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:05.717181     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:05.717181     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:05 GMT
	I0717 00:45:05.717181     784 round_trippers.go:580]     Audit-Id: ac22d868-fbb7-4342-ae94-2050af22c098
	I0717 00:45:05.717181     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:05.717181     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:05.717626     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-wz2jh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"68da0227-f0f9-4feb-a17c-6282b313b353","resourceVersion":"391","creationTimestamp":"2024-07-17T00:44:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b84c9d6-b98a-4a91-b9d9-06ef869db86f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:44:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b84c9d6-b98a-4a91-b9d9-06ef869db86f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6239 chars]
	I0717 00:45:05.718298     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:05.718467     784 round_trippers.go:469] Request Headers:
	I0717 00:45:05.718467     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:05.718467     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:05.799262     784 round_trippers.go:574] Response Status: 200 OK in 80 milliseconds
	I0717 00:45:05.799485     784 round_trippers.go:577] Response Headers:
	I0717 00:45:05.799784     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:05.799944     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:05.800050     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:05 GMT
	I0717 00:45:05.800270     784 round_trippers.go:580]     Audit-Id: f5a7a14e-fb64-436f-a101-4a4214abb420
	I0717 00:45:05.800313     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:05.800313     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:05.800374     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:05.802032     784 pod_ready.go:92] pod "coredns-7db6d8ff4d-wz2jh" in "kube-system" namespace has status "Ready":"True"
	I0717 00:45:05.802085     784 pod_ready.go:81] duration metric: took 285.3099ms for pod "coredns-7db6d8ff4d-wz2jh" in "kube-system" namespace to be "Ready" ...
	I0717 00:45:05.802085     784 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-965000" in "kube-system" namespace to be "Ready" ...
	I0717 00:45:05.802324     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/etcd-functional-965000
	I0717 00:45:05.802378     784 round_trippers.go:469] Request Headers:
	I0717 00:45:05.802378     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:05.802463     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:05.810928     784 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0717 00:45:05.810928     784 round_trippers.go:577] Response Headers:
	I0717 00:45:05.810928     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:05.810928     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:05.810928     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:05.810928     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:05.810928     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:05 GMT
	I0717 00:45:05.810928     784 round_trippers.go:580]     Audit-Id: c3d17d02-619f-453f-a094-1d392fbe0bd3
	I0717 00:45:05.810928     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-965000","namespace":"kube-system","uid":"6771d196-310e-4275-ab61-1f295b066578","resourceVersion":"286","creationTimestamp":"2024-07-17T00:43:58Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"3823ded6d2fcdfc98e53ff27b6721e6a","kubernetes.io/config.mirror":"3823ded6d2fcdfc98e53ff27b6721e6a","kubernetes.io/config.seen":"2024-07-17T00:43:57.645332058Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:43:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6153 chars]
	I0717 00:45:05.812195     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:05.812195     784 round_trippers.go:469] Request Headers:
	I0717 00:45:05.812195     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:05.812195     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:05.898811     784 round_trippers.go:574] Response Status: 200 OK in 86 milliseconds
	I0717 00:45:05.898811     784 round_trippers.go:577] Response Headers:
	I0717 00:45:05.898811     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:05.898811     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:05.898811     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:05 GMT
	I0717 00:45:05.898811     784 round_trippers.go:580]     Audit-Id: cd2bc84b-a152-4f63-96e9-52b07135ec4b
	I0717 00:45:05.898811     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:05.898811     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:05.899455     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:05.900330     784 pod_ready.go:92] pod "etcd-functional-965000" in "kube-system" namespace has status "Ready":"True"
	I0717 00:45:05.900330     784 pod_ready.go:81] duration metric: took 98.2448ms for pod "etcd-functional-965000" in "kube-system" namespace to be "Ready" ...
	I0717 00:45:05.900427     784 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-965000" in "kube-system" namespace to be "Ready" ...
	I0717 00:45:05.900698     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-965000
	I0717 00:45:05.900765     784 round_trippers.go:469] Request Headers:
	I0717 00:45:05.900765     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:05.900765     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:05.910430     784 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0717 00:45:05.910430     784 round_trippers.go:577] Response Headers:
	I0717 00:45:05.910531     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:05.910592     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:05.910678     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:05.910733     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:05 GMT
	I0717 00:45:05.910733     784 round_trippers.go:580]     Audit-Id: 4882c2b9-a72d-4263-a34c-90188ae9df68
	I0717 00:45:05.910733     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:05.911236     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-965000","namespace":"kube-system","uid":"6064fa8b-0e59-4c2e-bf51-9f444e7b247e","resourceVersion":"423","creationTimestamp":"2024-07-17T00:43:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.mirror":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.seen":"2024-07-17T00:43:57.645337859Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:43:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0717 00:45:05.911832     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:05.911832     784 round_trippers.go:469] Request Headers:
	I0717 00:45:05.912380     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:05.912380     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:05.996658     784 round_trippers.go:574] Response Status: 200 OK in 84 milliseconds
	I0717 00:45:05.996658     784 round_trippers.go:577] Response Headers:
	I0717 00:45:05.996658     784 round_trippers.go:580]     Audit-Id: 49b6ff7f-77a1-4492-b250-334450b9ca56
	I0717 00:45:05.996658     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:05.996658     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:05.996658     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:05.996658     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:05.996658     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:05 GMT
	I0717 00:45:05.997224     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:06.414330     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-965000
	I0717 00:45:06.414434     784 round_trippers.go:469] Request Headers:
	I0717 00:45:06.414434     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:06.414434     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:06.426760     784 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0717 00:45:06.426926     784 round_trippers.go:577] Response Headers:
	I0717 00:45:06.426926     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:06.426926     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:06 GMT
	I0717 00:45:06.426926     784 round_trippers.go:580]     Audit-Id: 2a07af52-7723-47e1-aba5-c53875c66716
	I0717 00:45:06.426986     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:06.426986     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:06.426986     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:06.427584     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-965000","namespace":"kube-system","uid":"6064fa8b-0e59-4c2e-bf51-9f444e7b247e","resourceVersion":"423","creationTimestamp":"2024-07-17T00:43:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.mirror":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.seen":"2024-07-17T00:43:57.645337859Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:43:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0717 00:45:06.428657     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:06.428657     784 round_trippers.go:469] Request Headers:
	I0717 00:45:06.428657     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:06.428657     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:06.434810     784 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 00:45:06.435014     784 round_trippers.go:577] Response Headers:
	I0717 00:45:06.435014     784 round_trippers.go:580]     Audit-Id: b506fd36-62bd-40bc-b933-1189c276f329
	I0717 00:45:06.435014     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:06.435089     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:06.435089     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:06.435089     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:06.435154     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:06 GMT
	I0717 00:45:06.435311     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:06.902758     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-965000
	I0717 00:45:06.902821     784 round_trippers.go:469] Request Headers:
	I0717 00:45:06.902901     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:06.902901     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:06.996330     784 round_trippers.go:574] Response Status: 200 OK in 93 milliseconds
	I0717 00:45:06.996434     784 round_trippers.go:577] Response Headers:
	I0717 00:45:06.996434     784 round_trippers.go:580]     Audit-Id: 92c8780b-46b7-4c96-b96c-56774c6c849f
	I0717 00:45:06.996434     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:06.996434     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:06.996434     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:06.996521     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:06.996521     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:06 GMT
	I0717 00:45:06.997384     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-965000","namespace":"kube-system","uid":"6064fa8b-0e59-4c2e-bf51-9f444e7b247e","resourceVersion":"423","creationTimestamp":"2024-07-17T00:43:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.mirror":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.seen":"2024-07-17T00:43:57.645337859Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:43:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0717 00:45:06.998756     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:06.998756     784 round_trippers.go:469] Request Headers:
	I0717 00:45:06.998756     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:06.998756     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:07.011746     784 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0717 00:45:07.011746     784 round_trippers.go:577] Response Headers:
	I0717 00:45:07.011746     784 round_trippers.go:580]     Audit-Id: deabf3e6-b7bd-45a8-9ff0-ec19d2bd9d5f
	I0717 00:45:07.011746     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:07.011746     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:07.011746     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:07.011746     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:07.011746     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:07 GMT
	I0717 00:45:07.011746     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:07.401824     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-965000
	I0717 00:45:07.401824     784 round_trippers.go:469] Request Headers:
	I0717 00:45:07.401824     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:07.401824     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:07.411236     784 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0717 00:45:07.411236     784 round_trippers.go:577] Response Headers:
	I0717 00:45:07.411236     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:07.411236     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:07.411236     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:07.411236     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:07 GMT
	I0717 00:45:07.411236     784 round_trippers.go:580]     Audit-Id: f72b5aa8-620c-4ae5-b421-19e2ad099f55
	I0717 00:45:07.413155     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:07.413634     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-965000","namespace":"kube-system","uid":"6064fa8b-0e59-4c2e-bf51-9f444e7b247e","resourceVersion":"423","creationTimestamp":"2024-07-17T00:43:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.mirror":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.seen":"2024-07-17T00:43:57.645337859Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:43:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0717 00:45:07.414954     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:07.415021     784 round_trippers.go:469] Request Headers:
	I0717 00:45:07.415107     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:07.415107     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:07.427462     784 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0717 00:45:07.427462     784 round_trippers.go:577] Response Headers:
	I0717 00:45:07.427462     784 round_trippers.go:580]     Audit-Id: fdb17b9c-bb14-44da-ac7e-92c0fdf4bf6c
	I0717 00:45:07.427462     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:07.427462     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:07.427462     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:07.427462     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:07.427462     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:07 GMT
	I0717 00:45:07.427462     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:07.615453     784 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0717 00:45:07.620395     784 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (6.5393927s)
	I0717 00:45:07.620924     784 round_trippers.go:463] GET https://127.0.0.1:63088/apis/storage.k8s.io/v1/storageclasses
	I0717 00:45:07.620924     784 round_trippers.go:469] Request Headers:
	I0717 00:45:07.620924     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:07.620924     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:07.696030     784 round_trippers.go:574] Response Status: 200 OK in 74 milliseconds
	I0717 00:45:07.696030     784 round_trippers.go:577] Response Headers:
	I0717 00:45:07.696030     784 round_trippers.go:580]     Audit-Id: 3d3635c6-8e67-4d79-95dc-ff86072c3022
	I0717 00:45:07.696030     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:07.696030     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:07.696030     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:07.696030     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:07.696030     784 round_trippers.go:580]     Content-Length: 1273
	I0717 00:45:07.696030     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:07 GMT
	I0717 00:45:07.696030     784 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"483"},"items":[{"metadata":{"name":"standard","uid":"3997a231-d9f6-48b3-8714-c384ec188573","resourceVersion":"350","creationTimestamp":"2024-07-17T00:44:13Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-17T00:44:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0717 00:45:07.698125     784 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"3997a231-d9f6-48b3-8714-c384ec188573","resourceVersion":"350","creationTimestamp":"2024-07-17T00:44:13Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-17T00:44:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0717 00:45:07.698125     784 round_trippers.go:463] PUT https://127.0.0.1:63088/apis/storage.k8s.io/v1/storageclasses/standard
	I0717 00:45:07.698125     784 round_trippers.go:469] Request Headers:
	I0717 00:45:07.698125     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:07.698125     784 round_trippers.go:473]     Content-Type: application/json
	I0717 00:45:07.698125     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:07.708626     784 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0717 00:45:07.708699     784 round_trippers.go:577] Response Headers:
	I0717 00:45:07.708699     784 round_trippers.go:580]     Audit-Id: 3305fd6b-39cd-4827-8b5b-421de4589eb5
	I0717 00:45:07.708782     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:07.708782     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:07.708782     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:07.708782     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:07.708834     784 round_trippers.go:580]     Content-Length: 1220
	I0717 00:45:07.708834     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:07 GMT
	I0717 00:45:07.708992     784 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"3997a231-d9f6-48b3-8714-c384ec188573","resourceVersion":"350","creationTimestamp":"2024-07-17T00:44:13Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-17T00:44:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0717 00:45:07.899759     784 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0717 00:45:07.899918     784 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0717 00:45:07.899918     784 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0717 00:45:07.899918     784 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0717 00:45:07.899918     784 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0717 00:45:07.899918     784 command_runner.go:130] > pod/storage-provisioner configured
	I0717 00:45:07.900065     784 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.2333412s)
	I0717 00:45:07.904058     784 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0717 00:45:07.904728     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-965000
	I0717 00:45:07.904728     784 round_trippers.go:469] Request Headers:
	I0717 00:45:07.904813     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:07.904813     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:07.906428     784 addons.go:510] duration metric: took 12.6986904s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0717 00:45:07.910548     784 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 00:45:07.910548     784 round_trippers.go:577] Response Headers:
	I0717 00:45:07.910548     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:07.910548     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:07.910548     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:07.910548     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:07.910548     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:07 GMT
	I0717 00:45:07.910548     784 round_trippers.go:580]     Audit-Id: e55acd71-565d-4356-87c7-dd6561a3f6b9
	I0717 00:45:07.911949     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-965000","namespace":"kube-system","uid":"6064fa8b-0e59-4c2e-bf51-9f444e7b247e","resourceVersion":"423","creationTimestamp":"2024-07-17T00:43:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.mirror":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.seen":"2024-07-17T00:43:57.645337859Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:43:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0717 00:45:07.913633     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:07.913704     784 round_trippers.go:469] Request Headers:
	I0717 00:45:07.913792     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:07.913792     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:07.921499     784 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 00:45:07.922063     784 round_trippers.go:577] Response Headers:
	I0717 00:45:07.922063     784 round_trippers.go:580]     Audit-Id: a2891ff7-c041-4f6e-a73c-1f656c83a936
	I0717 00:45:07.922063     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:07.922063     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:07.922063     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:07.922063     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:07.922143     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:07 GMT
	I0717 00:45:07.922328     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:07.922386     784 pod_ready.go:102] pod "kube-apiserver-functional-965000" in "kube-system" namespace has status "Ready":"False"
	I0717 00:45:08.404245     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-965000
	I0717 00:45:08.404245     784 round_trippers.go:469] Request Headers:
	I0717 00:45:08.404245     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:08.404245     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:08.411094     784 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 00:45:08.411094     784 round_trippers.go:577] Response Headers:
	I0717 00:45:08.411194     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:08.411194     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:08.411194     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:08 GMT
	I0717 00:45:08.411265     784 round_trippers.go:580]     Audit-Id: f5f28b0f-8788-49ea-953d-4b4a0eaf29b0
	I0717 00:45:08.411265     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:08.411310     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:08.411704     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-965000","namespace":"kube-system","uid":"6064fa8b-0e59-4c2e-bf51-9f444e7b247e","resourceVersion":"423","creationTimestamp":"2024-07-17T00:43:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.mirror":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.seen":"2024-07-17T00:43:57.645337859Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:43:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0717 00:45:08.412903     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:08.413060     784 round_trippers.go:469] Request Headers:
	I0717 00:45:08.413060     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:08.413060     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:08.417738     784 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:45:08.417738     784 round_trippers.go:577] Response Headers:
	I0717 00:45:08.417738     784 round_trippers.go:580]     Audit-Id: f9a0f27d-3fcf-4d0f-a273-eae03577e2d4
	I0717 00:45:08.417738     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:08.417738     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:08.417738     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:08.417738     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:08.417738     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:08 GMT
	I0717 00:45:08.418402     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:08.910219     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-965000
	I0717 00:45:08.910314     784 round_trippers.go:469] Request Headers:
	I0717 00:45:08.910314     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:08.910314     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:08.917948     784 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 00:45:08.918064     784 round_trippers.go:577] Response Headers:
	I0717 00:45:08.918064     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:08.918064     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:08 GMT
	I0717 00:45:08.918166     784 round_trippers.go:580]     Audit-Id: 57ebc08e-ae3d-4431-986b-d9d8eb1bf351
	I0717 00:45:08.918187     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:08.918187     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:08.918187     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:08.918341     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-965000","namespace":"kube-system","uid":"6064fa8b-0e59-4c2e-bf51-9f444e7b247e","resourceVersion":"423","creationTimestamp":"2024-07-17T00:43:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.mirror":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.seen":"2024-07-17T00:43:57.645337859Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:43:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0717 00:45:08.918341     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:08.918341     784 round_trippers.go:469] Request Headers:
	I0717 00:45:08.918341     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:08.918341     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:08.925293     784 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 00:45:08.925293     784 round_trippers.go:577] Response Headers:
	I0717 00:45:08.925404     784 round_trippers.go:580]     Audit-Id: ac3b0454-b08e-4f41-8934-3d544cb00ab8
	I0717 00:45:08.925404     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:08.925404     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:08.925404     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:08.925404     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:08.925404     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:08 GMT
	I0717 00:45:08.925888     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:09.411451     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-965000
	I0717 00:45:09.411774     784 round_trippers.go:469] Request Headers:
	I0717 00:45:09.411774     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:09.411774     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:09.418210     784 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 00:45:09.418210     784 round_trippers.go:577] Response Headers:
	I0717 00:45:09.418210     784 round_trippers.go:580]     Audit-Id: 3299fabf-cacd-41ae-a651-38b44e0e3556
	I0717 00:45:09.418210     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:09.418210     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:09.418210     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:09.418210     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:09.418210     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:09 GMT
	I0717 00:45:09.418210     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-965000","namespace":"kube-system","uid":"6064fa8b-0e59-4c2e-bf51-9f444e7b247e","resourceVersion":"423","creationTimestamp":"2024-07-17T00:43:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.mirror":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.seen":"2024-07-17T00:43:57.645337859Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:43:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0717 00:45:09.419394     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:09.419394     784 round_trippers.go:469] Request Headers:
	I0717 00:45:09.419394     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:09.419394     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:09.425632     784 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 00:45:09.425632     784 round_trippers.go:577] Response Headers:
	I0717 00:45:09.425632     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:09.425632     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:09.425632     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:09.425632     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:09 GMT
	I0717 00:45:09.425632     784 round_trippers.go:580]     Audit-Id: d3c34957-e5a2-4e8c-9bab-3dcc7c332393
	I0717 00:45:09.425632     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:09.426333     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:09.914391     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-965000
	I0717 00:45:09.914633     784 round_trippers.go:469] Request Headers:
	I0717 00:45:09.914633     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:09.914633     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:09.920995     784 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 00:45:09.920995     784 round_trippers.go:577] Response Headers:
	I0717 00:45:09.920995     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:09.920995     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:09 GMT
	I0717 00:45:09.920995     784 round_trippers.go:580]     Audit-Id: 84232422-f11d-4cd9-ab64-9123b31a8a9e
	I0717 00:45:09.920995     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:09.920995     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:09.920995     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:09.921657     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-965000","namespace":"kube-system","uid":"6064fa8b-0e59-4c2e-bf51-9f444e7b247e","resourceVersion":"423","creationTimestamp":"2024-07-17T00:43:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.mirror":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.seen":"2024-07-17T00:43:57.645337859Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:43:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0717 00:45:09.922113     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:09.922113     784 round_trippers.go:469] Request Headers:
	I0717 00:45:09.922113     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:09.922113     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:09.927325     784 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 00:45:09.927325     784 round_trippers.go:577] Response Headers:
	I0717 00:45:09.927325     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:09 GMT
	I0717 00:45:09.927325     784 round_trippers.go:580]     Audit-Id: e74f5b9b-b1d5-4f97-b272-e7672ac3491b
	I0717 00:45:09.927325     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:09.927325     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:09.927325     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:09.927325     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:09.928081     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:09.928081     784 pod_ready.go:102] pod "kube-apiserver-functional-965000" in "kube-system" namespace has status "Ready":"False"
	I0717 00:45:10.410649     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-965000
	I0717 00:45:10.410649     784 round_trippers.go:469] Request Headers:
	I0717 00:45:10.410649     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:10.410730     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:10.418580     784 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 00:45:10.418639     784 round_trippers.go:577] Response Headers:
	I0717 00:45:10.418639     784 round_trippers.go:580]     Audit-Id: d2c3a97c-6eba-4c2d-95e5-e1c5786609d9
	I0717 00:45:10.418639     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:10.418691     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:10.418691     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:10.418707     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:10.418707     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:10 GMT
	I0717 00:45:10.421069     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-965000","namespace":"kube-system","uid":"6064fa8b-0e59-4c2e-bf51-9f444e7b247e","resourceVersion":"423","creationTimestamp":"2024-07-17T00:43:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.mirror":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.seen":"2024-07-17T00:43:57.645337859Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:43:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0717 00:45:10.421900     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:10.421900     784 round_trippers.go:469] Request Headers:
	I0717 00:45:10.421900     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:10.421900     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:10.426549     784 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:45:10.426549     784 round_trippers.go:577] Response Headers:
	I0717 00:45:10.426549     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:10.426549     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:10.426549     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:10 GMT
	I0717 00:45:10.426549     784 round_trippers.go:580]     Audit-Id: 315ecd4e-aebb-459d-84ef-0599922ffde6
	I0717 00:45:10.427592     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:10.427592     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:10.427846     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:10.907575     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-965000
	I0717 00:45:10.907575     784 round_trippers.go:469] Request Headers:
	I0717 00:45:10.907575     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:10.907575     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:10.913697     784 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 00:45:10.913697     784 round_trippers.go:577] Response Headers:
	I0717 00:45:10.913697     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:10.913697     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:10.913697     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:10 GMT
	I0717 00:45:10.913697     784 round_trippers.go:580]     Audit-Id: 8d970c1a-dd35-47c8-8003-13ad68b9df5b
	I0717 00:45:10.913697     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:10.913697     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:10.914880     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-965000","namespace":"kube-system","uid":"6064fa8b-0e59-4c2e-bf51-9f444e7b247e","resourceVersion":"423","creationTimestamp":"2024-07-17T00:43:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.mirror":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.seen":"2024-07-17T00:43:57.645337859Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:43:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0717 00:45:10.915571     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:10.915571     784 round_trippers.go:469] Request Headers:
	I0717 00:45:10.915571     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:10.915571     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:10.922293     784 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 00:45:10.922293     784 round_trippers.go:577] Response Headers:
	I0717 00:45:10.922293     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:10 GMT
	I0717 00:45:10.922293     784 round_trippers.go:580]     Audit-Id: dd560d76-681b-4f10-936b-990a4e05bf4f
	I0717 00:45:10.922293     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:10.922293     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:10.922293     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:10.922293     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:10.922293     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:11.409723     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-965000
	I0717 00:45:11.409834     784 round_trippers.go:469] Request Headers:
	I0717 00:45:11.409834     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:11.409834     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:11.416776     784 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 00:45:11.416776     784 round_trippers.go:577] Response Headers:
	I0717 00:45:11.416840     784 round_trippers.go:580]     Audit-Id: 6befd78b-7bc8-4941-9950-babb58ca4884
	I0717 00:45:11.416840     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:11.416866     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:11.416866     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:11.416866     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:11.416866     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:11 GMT
	I0717 00:45:11.417098     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-965000","namespace":"kube-system","uid":"6064fa8b-0e59-4c2e-bf51-9f444e7b247e","resourceVersion":"423","creationTimestamp":"2024-07-17T00:43:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.mirror":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.seen":"2024-07-17T00:43:57.645337859Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:43:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0717 00:45:11.417956     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:11.417976     784 round_trippers.go:469] Request Headers:
	I0717 00:45:11.417976     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:11.417976     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:11.425547     784 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 00:45:11.425547     784 round_trippers.go:577] Response Headers:
	I0717 00:45:11.425547     784 round_trippers.go:580]     Audit-Id: d4d4c28b-15c5-4a8c-9e36-78721458aabb
	I0717 00:45:11.425547     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:11.425547     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:11.425547     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:11.425547     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:11.425547     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:11 GMT
	I0717 00:45:11.425547     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:11.912145     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-965000
	I0717 00:45:11.912145     784 round_trippers.go:469] Request Headers:
	I0717 00:45:11.912145     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:11.912145     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:11.920300     784 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0717 00:45:11.920841     784 round_trippers.go:577] Response Headers:
	I0717 00:45:11.920841     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:11.920841     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:11.920841     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:11.920841     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:11.920841     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:11 GMT
	I0717 00:45:11.920948     784 round_trippers.go:580]     Audit-Id: 8e05af22-ec36-42b0-9eb2-e0159b316e43
	I0717 00:45:11.921910     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-965000","namespace":"kube-system","uid":"6064fa8b-0e59-4c2e-bf51-9f444e7b247e","resourceVersion":"423","creationTimestamp":"2024-07-17T00:43:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.mirror":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.seen":"2024-07-17T00:43:57.645337859Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:43:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0717 00:45:11.922653     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:11.922709     784 round_trippers.go:469] Request Headers:
	I0717 00:45:11.922709     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:11.922827     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:11.929337     784 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 00:45:11.929337     784 round_trippers.go:577] Response Headers:
	I0717 00:45:11.929337     784 round_trippers.go:580]     Audit-Id: a7fb831e-74b6-45b0-8c76-e332f79aef8d
	I0717 00:45:11.929337     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:11.929337     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:11.929337     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:11.929337     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:11.929337     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:11 GMT
	I0717 00:45:11.930182     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:11.930381     784 pod_ready.go:102] pod "kube-apiserver-functional-965000" in "kube-system" namespace has status "Ready":"False"
	I0717 00:45:12.415969     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-965000
	I0717 00:45:12.415969     784 round_trippers.go:469] Request Headers:
	I0717 00:45:12.415969     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:12.415969     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:12.422878     784 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 00:45:12.423086     784 round_trippers.go:577] Response Headers:
	I0717 00:45:12.423086     784 round_trippers.go:580]     Audit-Id: bd85d952-9089-4b80-8f1b-fb71529c137e
	I0717 00:45:12.423086     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:12.423086     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:12.423086     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:12.423086     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:12.423147     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:12 GMT
	I0717 00:45:12.423612     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-965000","namespace":"kube-system","uid":"6064fa8b-0e59-4c2e-bf51-9f444e7b247e","resourceVersion":"423","creationTimestamp":"2024-07-17T00:43:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.mirror":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.seen":"2024-07-17T00:43:57.645337859Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:43:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0717 00:45:12.424361     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:12.424448     784 round_trippers.go:469] Request Headers:
	I0717 00:45:12.424448     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:12.424448     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:12.433584     784 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0717 00:45:12.433584     784 round_trippers.go:577] Response Headers:
	I0717 00:45:12.433584     784 round_trippers.go:580]     Audit-Id: 26718cba-68a7-4635-bff4-56ab92c62ee2
	I0717 00:45:12.433584     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:12.433584     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:12.433584     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:12.433584     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:12.433584     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:12 GMT
	I0717 00:45:12.434416     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:12.906930     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-965000
	I0717 00:45:12.906930     784 round_trippers.go:469] Request Headers:
	I0717 00:45:12.907034     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:12.907034     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:12.913589     784 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 00:45:12.913675     784 round_trippers.go:577] Response Headers:
	I0717 00:45:12.913675     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:12.913675     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:12.913675     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:12.913675     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:12 GMT
	I0717 00:45:12.913796     784 round_trippers.go:580]     Audit-Id: 3e39cd91-4a14-4cb0-9a69-de366e7f8d2e
	I0717 00:45:12.913796     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:12.913905     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-965000","namespace":"kube-system","uid":"6064fa8b-0e59-4c2e-bf51-9f444e7b247e","resourceVersion":"423","creationTimestamp":"2024-07-17T00:43:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.mirror":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.seen":"2024-07-17T00:43:57.645337859Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:43:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0717 00:45:12.914664     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:12.914664     784 round_trippers.go:469] Request Headers:
	I0717 00:45:12.914664     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:12.914664     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:12.921338     784 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 00:45:12.921338     784 round_trippers.go:577] Response Headers:
	I0717 00:45:12.921338     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:12.921338     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:12.921338     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:12.921338     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:12.921338     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:12 GMT
	I0717 00:45:12.921338     784 round_trippers.go:580]     Audit-Id: 69cd70aa-7d42-4bf3-a11a-df35b388a3de
	I0717 00:45:12.921878     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:13.408116     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-965000
	I0717 00:45:13.408116     784 round_trippers.go:469] Request Headers:
	I0717 00:45:13.408116     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:13.408116     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:13.414804     784 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 00:45:13.414900     784 round_trippers.go:577] Response Headers:
	I0717 00:45:13.414900     784 round_trippers.go:580]     Audit-Id: 2a77da4c-22de-43a8-bf7d-37b54eba7aa3
	I0717 00:45:13.414900     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:13.414900     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:13.414900     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:13.414900     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:13.415056     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:13 GMT
	I0717 00:45:13.415397     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-965000","namespace":"kube-system","uid":"6064fa8b-0e59-4c2e-bf51-9f444e7b247e","resourceVersion":"423","creationTimestamp":"2024-07-17T00:43:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.mirror":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.seen":"2024-07-17T00:43:57.645337859Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:43:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0717 00:45:13.416213     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:13.416213     784 round_trippers.go:469] Request Headers:
	I0717 00:45:13.416301     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:13.416301     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:13.421615     784 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 00:45:13.422151     784 round_trippers.go:577] Response Headers:
	I0717 00:45:13.422185     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:13.422185     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:13.422185     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:13.422227     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:13 GMT
	I0717 00:45:13.422227     784 round_trippers.go:580]     Audit-Id: 97c58d06-c1b6-43bc-a205-c9e3692fa345
	I0717 00:45:13.422227     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:13.422780     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:13.908520     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-965000
	I0717 00:45:13.908648     784 round_trippers.go:469] Request Headers:
	I0717 00:45:13.908648     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:13.908648     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:13.917020     784 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0717 00:45:13.917020     784 round_trippers.go:577] Response Headers:
	I0717 00:45:13.917020     784 round_trippers.go:580]     Audit-Id: 064f3e74-71c5-40fe-aa13-6d9053655fe2
	I0717 00:45:13.917020     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:13.917020     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:13.917020     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:13.917020     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:13.917020     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:13 GMT
	I0717 00:45:13.917020     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-965000","namespace":"kube-system","uid":"6064fa8b-0e59-4c2e-bf51-9f444e7b247e","resourceVersion":"423","creationTimestamp":"2024-07-17T00:43:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.mirror":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.seen":"2024-07-17T00:43:57.645337859Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:43:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0717 00:45:13.917651     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:13.918183     784 round_trippers.go:469] Request Headers:
	I0717 00:45:13.918183     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:13.918183     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:13.923760     784 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 00:45:13.923760     784 round_trippers.go:577] Response Headers:
	I0717 00:45:13.923760     784 round_trippers.go:580]     Audit-Id: acf7862b-0dcc-4d82-bdc8-235d99463f40
	I0717 00:45:13.923760     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:13.923760     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:13.923760     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:13.923760     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:13.923760     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:13 GMT
	I0717 00:45:13.924936     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:14.410821     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-965000
	I0717 00:45:14.410914     784 round_trippers.go:469] Request Headers:
	I0717 00:45:14.410914     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:14.410914     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:14.417976     784 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 00:45:14.417976     784 round_trippers.go:577] Response Headers:
	I0717 00:45:14.417976     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:14.417976     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:14.417976     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:14.417976     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:14 GMT
	I0717 00:45:14.417976     784 round_trippers.go:580]     Audit-Id: c28cb210-678a-41c7-ad05-66deabe0d48b
	I0717 00:45:14.417976     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:14.419217     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-965000","namespace":"kube-system","uid":"6064fa8b-0e59-4c2e-bf51-9f444e7b247e","resourceVersion":"423","creationTimestamp":"2024-07-17T00:43:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.mirror":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.seen":"2024-07-17T00:43:57.645337859Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:43:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0717 00:45:14.421108     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:14.421108     784 round_trippers.go:469] Request Headers:
	I0717 00:45:14.421108     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:14.421108     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:14.427027     784 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 00:45:14.427559     784 round_trippers.go:577] Response Headers:
	I0717 00:45:14.427559     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:14.427620     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:14.427620     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:14.427620     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:14 GMT
	I0717 00:45:14.427620     784 round_trippers.go:580]     Audit-Id: ce987a72-61cc-4002-8039-74f543f25a4c
	I0717 00:45:14.427620     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:14.427832     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:14.427832     784 pod_ready.go:102] pod "kube-apiserver-functional-965000" in "kube-system" namespace has status "Ready":"False"
	I0717 00:45:14.902654     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-965000
	I0717 00:45:14.902775     784 round_trippers.go:469] Request Headers:
	I0717 00:45:14.902775     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:14.902775     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:14.909647     784 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 00:45:14.909893     784 round_trippers.go:577] Response Headers:
	I0717 00:45:14.909941     784 round_trippers.go:580]     Audit-Id: 7f3206e8-eb63-4852-9ed8-9cc092e6b1e4
	I0717 00:45:14.909941     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:14.910005     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:14.910041     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:14.910041     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:14.910041     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:14 GMT
	I0717 00:45:14.910767     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-965000","namespace":"kube-system","uid":"6064fa8b-0e59-4c2e-bf51-9f444e7b247e","resourceVersion":"423","creationTimestamp":"2024-07-17T00:43:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.mirror":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.seen":"2024-07-17T00:43:57.645337859Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:43:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0717 00:45:14.911948     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:14.912006     784 round_trippers.go:469] Request Headers:
	I0717 00:45:14.912006     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:14.912006     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:14.917979     784 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 00:45:14.917979     784 round_trippers.go:577] Response Headers:
	I0717 00:45:14.917979     784 round_trippers.go:580]     Audit-Id: f9f73df9-69e9-4707-be24-d9be57f9f570
	I0717 00:45:14.917979     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:14.917979     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:14.917979     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:14.917979     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:14.917979     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:14 GMT
	I0717 00:45:14.919088     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:15.410295     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-965000
	I0717 00:45:15.410410     784 round_trippers.go:469] Request Headers:
	I0717 00:45:15.410410     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:15.410410     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:15.417919     784 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 00:45:15.417919     784 round_trippers.go:577] Response Headers:
	I0717 00:45:15.417919     784 round_trippers.go:580]     Audit-Id: e4666379-506c-4884-9e11-c8537be93b4f
	I0717 00:45:15.417919     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:15.417919     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:15.417919     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:15.417919     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:15.417919     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:15 GMT
	I0717 00:45:15.417919     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-965000","namespace":"kube-system","uid":"6064fa8b-0e59-4c2e-bf51-9f444e7b247e","resourceVersion":"423","creationTimestamp":"2024-07-17T00:43:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.mirror":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.seen":"2024-07-17T00:43:57.645337859Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:43:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8986 chars]
	I0717 00:45:15.419001     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:15.419001     784 round_trippers.go:469] Request Headers:
	I0717 00:45:15.419001     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:15.419001     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:15.423851     784 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:45:15.423851     784 round_trippers.go:577] Response Headers:
	I0717 00:45:15.423851     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:15.423851     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:15.423851     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:15 GMT
	I0717 00:45:15.423851     784 round_trippers.go:580]     Audit-Id: 8431b29b-7333-4267-8640-171220607fbc
	I0717 00:45:15.423851     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:15.423851     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:15.423851     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:15.903520     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-965000
	I0717 00:45:15.903719     784 round_trippers.go:469] Request Headers:
	I0717 00:45:15.903719     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:15.903719     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:15.909526     784 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 00:45:15.909526     784 round_trippers.go:577] Response Headers:
	I0717 00:45:15.909526     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:15.909526     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:15.909526     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:15 GMT
	I0717 00:45:15.909526     784 round_trippers.go:580]     Audit-Id: 8fd75e9b-f04d-4f86-8268-3aff9a59cb93
	I0717 00:45:15.909526     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:15.909526     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:15.909526     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-965000","namespace":"kube-system","uid":"6064fa8b-0e59-4c2e-bf51-9f444e7b247e","resourceVersion":"491","creationTimestamp":"2024-07-17T00:43:58Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.mirror":"1f96f42d4c4d9ac651471a09c85d1277","kubernetes.io/config.seen":"2024-07-17T00:43:57.645337859Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:43:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8742 chars]
	I0717 00:45:15.910715     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:15.910715     784 round_trippers.go:469] Request Headers:
	I0717 00:45:15.910715     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:15.910828     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:15.917314     784 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 00:45:15.917392     784 round_trippers.go:577] Response Headers:
	I0717 00:45:15.917437     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:15.917437     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:15.917437     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:15.917437     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:15.917437     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:15 GMT
	I0717 00:45:15.917437     784 round_trippers.go:580]     Audit-Id: 9bb5a2e8-e7b1-4138-8dbd-3b82dd34dd93
	I0717 00:45:15.917437     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:15.918226     784 pod_ready.go:92] pod "kube-apiserver-functional-965000" in "kube-system" namespace has status "Ready":"True"
	I0717 00:45:15.918272     784 pod_ready.go:81] duration metric: took 10.0177176s for pod "kube-apiserver-functional-965000" in "kube-system" namespace to be "Ready" ...
	I0717 00:45:15.918314     784 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-965000" in "kube-system" namespace to be "Ready" ...
	I0717 00:45:15.918426     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-965000
	I0717 00:45:15.918493     784 round_trippers.go:469] Request Headers:
	I0717 00:45:15.918513     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:15.918513     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:15.924909     784 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 00:45:15.925447     784 round_trippers.go:577] Response Headers:
	I0717 00:45:15.925447     784 round_trippers.go:580]     Audit-Id: eb2f0ed6-6e3a-451d-ae80-e2f6d0503224
	I0717 00:45:15.925447     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:15.925447     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:15.925447     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:15.925447     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:15.925447     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:15 GMT
	I0717 00:45:15.925583     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-965000","namespace":"kube-system","uid":"3e64de1f-68d7-4046-8c53-84e7d439d726","resourceVersion":"487","creationTimestamp":"2024-07-17T00:43:58Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a70bc62f5b603e4a3c252518143d0b34","kubernetes.io/config.mirror":"a70bc62f5b603e4a3c252518143d0b34","kubernetes.io/config.seen":"2024-07-17T00:43:57.645340359Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:43:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8315 chars]
	I0717 00:45:15.926245     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:15.926245     784 round_trippers.go:469] Request Headers:
	I0717 00:45:15.926245     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:15.926245     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:15.932271     784 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 00:45:15.932373     784 round_trippers.go:577] Response Headers:
	I0717 00:45:15.932373     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:15.932373     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:15.932431     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:15 GMT
	I0717 00:45:15.932431     784 round_trippers.go:580]     Audit-Id: b4f708f9-78b0-4e5d-9df5-9e2e33495ab0
	I0717 00:45:15.932431     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:15.932431     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:15.932655     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:15.932881     784 pod_ready.go:92] pod "kube-controller-manager-functional-965000" in "kube-system" namespace has status "Ready":"True"
	I0717 00:45:15.932881     784 pod_ready.go:81] duration metric: took 14.5669ms for pod "kube-controller-manager-functional-965000" in "kube-system" namespace to be "Ready" ...
	I0717 00:45:15.932881     784 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jsqf2" in "kube-system" namespace to be "Ready" ...
	I0717 00:45:15.932881     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/kube-proxy-jsqf2
	I0717 00:45:15.932881     784 round_trippers.go:469] Request Headers:
	I0717 00:45:15.932881     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:15.932881     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:15.942018     784 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0717 00:45:15.942018     784 round_trippers.go:577] Response Headers:
	I0717 00:45:15.942018     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:15 GMT
	I0717 00:45:15.942018     784 round_trippers.go:580]     Audit-Id: 5f087fb1-aeee-4db6-882f-1c7cae744010
	I0717 00:45:15.942018     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:15.942018     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:15.942018     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:15.942018     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:15.943738     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jsqf2","generateName":"kube-proxy-","namespace":"kube-system","uid":"d6190a0c-9716-4686-85dc-6033bd40a184","resourceVersion":"433","creationTimestamp":"2024-07-17T00:44:11Z","labels":{"controller-revision-hash":"669fc44fbc","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"400ce8d8-bd5f-422a-a954-503a42d55424","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:44:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"400ce8d8-bd5f-422a-a954-503a42d55424\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6030 chars]
	I0717 00:45:15.944503     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:15.944503     784 round_trippers.go:469] Request Headers:
	I0717 00:45:15.944503     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:15.944503     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:15.949612     784 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:45:15.949612     784 round_trippers.go:577] Response Headers:
	I0717 00:45:15.949671     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:15.949671     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:15.949671     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:15.949671     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:15 GMT
	I0717 00:45:15.949671     784 round_trippers.go:580]     Audit-Id: 1cdf2103-3464-46ec-a904-a729136230c4
	I0717 00:45:15.949709     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:15.949916     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:15.949916     784 pod_ready.go:92] pod "kube-proxy-jsqf2" in "kube-system" namespace has status "Ready":"True"
	I0717 00:45:15.949916     784 pod_ready.go:81] duration metric: took 17.0347ms for pod "kube-proxy-jsqf2" in "kube-system" namespace to be "Ready" ...
	I0717 00:45:15.949916     784 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-965000" in "kube-system" namespace to be "Ready" ...
	I0717 00:45:15.949916     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-965000
	I0717 00:45:15.949916     784 round_trippers.go:469] Request Headers:
	I0717 00:45:15.949916     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:15.949916     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:15.956954     784 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 00:45:15.956954     784 round_trippers.go:577] Response Headers:
	I0717 00:45:15.956954     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:15.956954     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:15.956954     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:15.956954     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:15.956954     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:15 GMT
	I0717 00:45:15.956954     784 round_trippers.go:580]     Audit-Id: b33795a3-856b-403a-ae2c-a9e41f99de50
	I0717 00:45:15.957610     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-965000","namespace":"kube-system","uid":"5132531f-55d1-4d5e-93c4-1bee32490e40","resourceVersion":"450","creationTimestamp":"2024-07-17T00:43:58Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ea288c3d0d4f9ec59e0cc124b9c0c2c4","kubernetes.io/config.mirror":"ea288c3d0d4f9ec59e0cc124b9c0c2c4","kubernetes.io/config.seen":"2024-07-17T00:43:57.645342459Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:43:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0717 00:45:15.957610     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:15.957610     784 round_trippers.go:469] Request Headers:
	I0717 00:45:15.957610     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:15.957610     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:15.960702     784 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 00:45:15.960702     784 round_trippers.go:577] Response Headers:
	I0717 00:45:15.960702     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:15.960702     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:15.960702     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:15.960702     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:15 GMT
	I0717 00:45:15.960702     784 round_trippers.go:580]     Audit-Id: 53d4c676-84fb-47fa-8ef8-36477820f7c9
	I0717 00:45:15.960702     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:15.962458     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:16.465603     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-965000
	I0717 00:45:16.465859     784 round_trippers.go:469] Request Headers:
	I0717 00:45:16.465859     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:16.465859     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:16.473013     784 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 00:45:16.473041     784 round_trippers.go:577] Response Headers:
	I0717 00:45:16.473041     784 round_trippers.go:580]     Audit-Id: 282abc0f-0d60-4d72-89a1-59b19bccf9f6
	I0717 00:45:16.473102     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:16.473102     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:16.473102     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:16.473160     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:16.473228     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:16 GMT
	I0717 00:45:16.473852     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-965000","namespace":"kube-system","uid":"5132531f-55d1-4d5e-93c4-1bee32490e40","resourceVersion":"450","creationTimestamp":"2024-07-17T00:43:58Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ea288c3d0d4f9ec59e0cc124b9c0c2c4","kubernetes.io/config.mirror":"ea288c3d0d4f9ec59e0cc124b9c0c2c4","kubernetes.io/config.seen":"2024-07-17T00:43:57.645342459Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:43:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0717 00:45:16.474436     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:16.474436     784 round_trippers.go:469] Request Headers:
	I0717 00:45:16.474436     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:16.474436     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:16.481515     784 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 00:45:16.481550     784 round_trippers.go:577] Response Headers:
	I0717 00:45:16.481550     784 round_trippers.go:580]     Audit-Id: 5bd556dd-5c54-4f56-8d30-1eaec290a06d
	I0717 00:45:16.481550     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:16.481550     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:16.481550     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:16.481550     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:16.481550     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:16 GMT
	I0717 00:45:16.481550     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:16.972042     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-965000
	I0717 00:45:16.972042     784 round_trippers.go:469] Request Headers:
	I0717 00:45:16.972042     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:16.972042     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:16.981607     784 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0717 00:45:16.981607     784 round_trippers.go:577] Response Headers:
	I0717 00:45:16.981607     784 round_trippers.go:580]     Audit-Id: c85f4c76-0222-48e3-8eca-ba42bf080553
	I0717 00:45:16.981607     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:16.981607     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:16.981607     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:16.981607     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:16.981607     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:16 GMT
	I0717 00:45:16.982234     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-965000","namespace":"kube-system","uid":"5132531f-55d1-4d5e-93c4-1bee32490e40","resourceVersion":"450","creationTimestamp":"2024-07-17T00:43:58Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ea288c3d0d4f9ec59e0cc124b9c0c2c4","kubernetes.io/config.mirror":"ea288c3d0d4f9ec59e0cc124b9c0c2c4","kubernetes.io/config.seen":"2024-07-17T00:43:57.645342459Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:43:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0717 00:45:16.983501     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:16.983501     784 round_trippers.go:469] Request Headers:
	I0717 00:45:16.983501     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:16.983501     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:16.991446     784 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 00:45:16.991498     784 round_trippers.go:577] Response Headers:
	I0717 00:45:16.991498     784 round_trippers.go:580]     Audit-Id: 1eb06c0c-6faa-41c0-9d6a-2d6573c7eaf8
	I0717 00:45:16.991668     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:16.991668     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:16.991668     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:16.991668     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:16.991668     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:16 GMT
	I0717 00:45:16.992417     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:17.450992     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-965000
	I0717 00:45:17.451087     784 round_trippers.go:469] Request Headers:
	I0717 00:45:17.451087     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:17.451087     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:17.458242     784 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 00:45:17.458242     784 round_trippers.go:577] Response Headers:
	I0717 00:45:17.458242     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:17.458242     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:17 GMT
	I0717 00:45:17.458242     784 round_trippers.go:580]     Audit-Id: 45b2489a-763a-46e6-a005-cee36cf85897
	I0717 00:45:17.458242     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:17.458242     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:17.458242     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:17.458242     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-965000","namespace":"kube-system","uid":"5132531f-55d1-4d5e-93c4-1bee32490e40","resourceVersion":"450","creationTimestamp":"2024-07-17T00:43:58Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ea288c3d0d4f9ec59e0cc124b9c0c2c4","kubernetes.io/config.mirror":"ea288c3d0d4f9ec59e0cc124b9c0c2c4","kubernetes.io/config.seen":"2024-07-17T00:43:57.645342459Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:43:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0717 00:45:17.459514     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:17.459514     784 round_trippers.go:469] Request Headers:
	I0717 00:45:17.459514     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:17.459514     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:17.466304     784 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 00:45:17.466304     784 round_trippers.go:577] Response Headers:
	I0717 00:45:17.466304     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:17.466304     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:17.466304     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:17 GMT
	I0717 00:45:17.466304     784 round_trippers.go:580]     Audit-Id: c548e2ac-c8d8-4eb8-a3c4-51112a1f88ff
	I0717 00:45:17.466304     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:17.466304     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:17.467053     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:17.956023     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-965000
	I0717 00:45:17.956251     784 round_trippers.go:469] Request Headers:
	I0717 00:45:17.956251     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:17.956251     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:17.964762     784 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0717 00:45:17.964762     784 round_trippers.go:577] Response Headers:
	I0717 00:45:17.964762     784 round_trippers.go:580]     Audit-Id: 6a325f4a-33b2-4501-a15e-8ce0bde7d37c
	I0717 00:45:17.964762     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:17.964762     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:17.964762     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:17.964762     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:17.964762     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:17 GMT
	I0717 00:45:17.965327     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-965000","namespace":"kube-system","uid":"5132531f-55d1-4d5e-93c4-1bee32490e40","resourceVersion":"450","creationTimestamp":"2024-07-17T00:43:58Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ea288c3d0d4f9ec59e0cc124b9c0c2c4","kubernetes.io/config.mirror":"ea288c3d0d4f9ec59e0cc124b9c0c2c4","kubernetes.io/config.seen":"2024-07-17T00:43:57.645342459Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:43:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0717 00:45:17.966060     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:17.966097     784 round_trippers.go:469] Request Headers:
	I0717 00:45:17.966097     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:17.966134     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:17.974166     784 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 00:45:17.974166     784 round_trippers.go:577] Response Headers:
	I0717 00:45:17.974166     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:17.974166     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:17.974166     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:17.974166     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:17 GMT
	I0717 00:45:17.974166     784 round_trippers.go:580]     Audit-Id: 14f59ac6-3576-4004-aaff-c80eb45ee374
	I0717 00:45:17.974166     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:17.974764     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:17.974872     784 pod_ready.go:102] pod "kube-scheduler-functional-965000" in "kube-system" namespace has status "Ready":"False"
	I0717 00:45:18.454622     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-965000
	I0717 00:45:18.454701     784 round_trippers.go:469] Request Headers:
	I0717 00:45:18.454701     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:18.454701     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:18.462921     784 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0717 00:45:18.462921     784 round_trippers.go:577] Response Headers:
	I0717 00:45:18.462921     784 round_trippers.go:580]     Audit-Id: 18a3b3a8-0890-46ca-83f3-d86bcd0be581
	I0717 00:45:18.462921     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:18.462921     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:18.462921     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:18.462921     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:18.462921     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:18 GMT
	I0717 00:45:18.463625     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-965000","namespace":"kube-system","uid":"5132531f-55d1-4d5e-93c4-1bee32490e40","resourceVersion":"450","creationTimestamp":"2024-07-17T00:43:58Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ea288c3d0d4f9ec59e0cc124b9c0c2c4","kubernetes.io/config.mirror":"ea288c3d0d4f9ec59e0cc124b9c0c2c4","kubernetes.io/config.seen":"2024-07-17T00:43:57.645342459Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:43:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0717 00:45:18.464185     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:18.464301     784 round_trippers.go:469] Request Headers:
	I0717 00:45:18.464301     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:18.464301     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:18.471632     784 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 00:45:18.471692     784 round_trippers.go:577] Response Headers:
	I0717 00:45:18.471752     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:18.471752     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:18 GMT
	I0717 00:45:18.471752     784 round_trippers.go:580]     Audit-Id: 13ba49ac-c801-46c5-bb8b-0cb7e96b1de0
	I0717 00:45:18.471806     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:18.471806     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:18.471874     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:18.472161     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:18.963980     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-965000
	I0717 00:45:18.964240     784 round_trippers.go:469] Request Headers:
	I0717 00:45:18.964240     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:18.964240     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:18.969387     784 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 00:45:18.969416     784 round_trippers.go:577] Response Headers:
	I0717 00:45:18.969416     784 round_trippers.go:580]     Audit-Id: fb73d09b-5fe1-4026-a834-074b7d8927c8
	I0717 00:45:18.969416     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:18.969416     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:18.969416     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:18.969416     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:18.969416     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:18 GMT
	I0717 00:45:18.969416     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-965000","namespace":"kube-system","uid":"5132531f-55d1-4d5e-93c4-1bee32490e40","resourceVersion":"450","creationTimestamp":"2024-07-17T00:43:58Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ea288c3d0d4f9ec59e0cc124b9c0c2c4","kubernetes.io/config.mirror":"ea288c3d0d4f9ec59e0cc124b9c0c2c4","kubernetes.io/config.seen":"2024-07-17T00:43:57.645342459Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:43:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0717 00:45:18.970299     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:18.970299     784 round_trippers.go:469] Request Headers:
	I0717 00:45:18.970299     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:18.970299     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:18.976397     784 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 00:45:18.976397     784 round_trippers.go:577] Response Headers:
	I0717 00:45:18.976397     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:18.976397     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:18.976397     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:18.976397     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:18.976397     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:18 GMT
	I0717 00:45:18.976397     784 round_trippers.go:580]     Audit-Id: f6edcf3f-ca30-4e76-96de-b6c59a34ceb2
	I0717 00:45:18.976397     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:19.463471     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-965000
	I0717 00:45:19.463471     784 round_trippers.go:469] Request Headers:
	I0717 00:45:19.463471     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:19.463471     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:19.469005     784 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 00:45:19.469138     784 round_trippers.go:577] Response Headers:
	I0717 00:45:19.469138     784 round_trippers.go:580]     Audit-Id: 74cf526e-a8d3-40d5-8589-ae2dcc86d5a1
	I0717 00:45:19.469138     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:19.469138     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:19.469138     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:19.469138     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:19.469138     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:19 GMT
	I0717 00:45:19.469472     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-965000","namespace":"kube-system","uid":"5132531f-55d1-4d5e-93c4-1bee32490e40","resourceVersion":"450","creationTimestamp":"2024-07-17T00:43:58Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ea288c3d0d4f9ec59e0cc124b9c0c2c4","kubernetes.io/config.mirror":"ea288c3d0d4f9ec59e0cc124b9c0c2c4","kubernetes.io/config.seen":"2024-07-17T00:43:57.645342459Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:43:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0717 00:45:19.470383     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:19.470383     784 round_trippers.go:469] Request Headers:
	I0717 00:45:19.470383     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:19.470451     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:19.474532     784 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 00:45:19.474532     784 round_trippers.go:577] Response Headers:
	I0717 00:45:19.474532     784 round_trippers.go:580]     Audit-Id: e498c812-bec5-4926-aab3-91ed398d4afd
	I0717 00:45:19.474532     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:19.474532     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:19.474532     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:19.474532     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:19.474532     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:19 GMT
	I0717 00:45:19.475066     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:19.955521     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-965000
	I0717 00:45:19.955521     784 round_trippers.go:469] Request Headers:
	I0717 00:45:19.955647     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:19.955647     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:19.961424     784 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 00:45:19.961603     784 round_trippers.go:577] Response Headers:
	I0717 00:45:19.961603     784 round_trippers.go:580]     Audit-Id: 9bcaffaf-37cf-42d3-bf27-b3fe2d2ca0e0
	I0717 00:45:19.961603     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:19.961603     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:19.961603     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:19.961603     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:19.961603     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:19 GMT
	I0717 00:45:19.961916     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-965000","namespace":"kube-system","uid":"5132531f-55d1-4d5e-93c4-1bee32490e40","resourceVersion":"450","creationTimestamp":"2024-07-17T00:43:58Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ea288c3d0d4f9ec59e0cc124b9c0c2c4","kubernetes.io/config.mirror":"ea288c3d0d4f9ec59e0cc124b9c0c2c4","kubernetes.io/config.seen":"2024-07-17T00:43:57.645342459Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:43:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0717 00:45:19.962788     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:19.962829     784 round_trippers.go:469] Request Headers:
	I0717 00:45:19.962829     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:19.962880     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:19.970784     784 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 00:45:19.970784     784 round_trippers.go:577] Response Headers:
	I0717 00:45:19.970784     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:19 GMT
	I0717 00:45:19.970784     784 round_trippers.go:580]     Audit-Id: 1b736377-770b-4e0d-b882-bb03c185af52
	I0717 00:45:19.970784     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:19.970784     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:19.970784     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:19.970784     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:19.970784     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:20.462225     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-965000
	I0717 00:45:20.462507     784 round_trippers.go:469] Request Headers:
	I0717 00:45:20.462507     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:20.462507     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:20.469533     784 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0717 00:45:20.469533     784 round_trippers.go:577] Response Headers:
	I0717 00:45:20.469533     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:20.469533     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:20.469533     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:20.469533     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:20 GMT
	I0717 00:45:20.469533     784 round_trippers.go:580]     Audit-Id: 526b262b-3001-45e3-8a96-5221284a767a
	I0717 00:45:20.469611     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:20.469809     784 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-965000","namespace":"kube-system","uid":"5132531f-55d1-4d5e-93c4-1bee32490e40","resourceVersion":"504","creationTimestamp":"2024-07-17T00:43:58Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ea288c3d0d4f9ec59e0cc124b9c0c2c4","kubernetes.io/config.mirror":"ea288c3d0d4f9ec59e0cc124b9c0c2c4","kubernetes.io/config.seen":"2024-07-17T00:43:57.645342459Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:43:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5197 chars]
	I0717 00:45:20.470135     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes/functional-965000
	I0717 00:45:20.470135     784 round_trippers.go:469] Request Headers:
	I0717 00:45:20.470135     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:20.470666     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:20.477935     784 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 00:45:20.477935     784 round_trippers.go:577] Response Headers:
	I0717 00:45:20.477935     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:20.477935     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:20.477935     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:20.477935     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:20 GMT
	I0717 00:45:20.477935     784 round_trippers.go:580]     Audit-Id: f96e2ab0-89dd-4e12-bfba-64a9ae4943dc
	I0717 00:45:20.477935     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:20.478489     784 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-07-17T00:43:54Z","fieldsType":"FieldsV1", [truncated 4855 chars]
	I0717 00:45:20.478880     784 pod_ready.go:92] pod "kube-scheduler-functional-965000" in "kube-system" namespace has status "Ready":"True"
	I0717 00:45:20.478880     784 pod_ready.go:81] duration metric: took 4.528927s for pod "kube-scheduler-functional-965000" in "kube-system" namespace to be "Ready" ...
	I0717 00:45:20.478880     784 pod_ready.go:38] duration metric: took 15.0732351s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
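The polling loop above repeatedly GETs each system pod and stops once its status carries the Ready condition. A minimal sketch of that predicate (the `condition` struct and `podReady` helper are illustrative, not minikube's actual `pod_ready.go` code):

```go
package main

import "fmt"

// condition mirrors the relevant part of a Pod's status.conditions entry.
type condition struct {
	Type   string
	Status string
}

// podReady reports whether the conditions contain Ready=True -- the same
// predicate the wait loop above applies to each polled pod response.
func podReady(conds []condition) bool {
	for _, c := range conds {
		if c.Type == "Ready" && c.Status == "True" {
			return true
		}
	}
	return false
}

func main() {
	conds := []condition{{"PodScheduled", "True"}, {"Ready", "True"}}
	fmt.Println(podReady(conds))
}
```

In the real client this check runs against a decoded `corev1.Pod` from client-go, with a backoff between polls (roughly the ~500 ms spacing visible in the timestamps above).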
	I0717 00:45:20.478964     784 api_server.go:52] waiting for apiserver process to appear ...
	I0717 00:45:20.489603     784 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 00:45:20.517038     784 command_runner.go:130] > 6422
	I0717 00:45:20.517038     784 api_server.go:72] duration metric: took 25.3090233s to wait for apiserver process to appear ...
	I0717 00:45:20.517038     784 api_server.go:88] waiting for apiserver healthz status ...
	I0717 00:45:20.517038     784 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:63088/healthz ...
	I0717 00:45:20.529448     784 api_server.go:279] https://127.0.0.1:63088/healthz returned 200:
	ok
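The healthz probe above counts the apiserver as healthy when `/healthz` returns HTTP 200 with the literal body `ok`. A sketch of that decision under those assumptions (the `healthy` helper is hypothetical):

```go
package main

import (
	"fmt"
	"net/http"
)

// healthy mirrors the check logged above: the apiserver is considered
// healthy only when /healthz answers 200 with the exact body "ok".
func healthy(status int, body string) bool {
	return status == http.StatusOK && body == "ok"
}

func main() {
	fmt.Println(healthy(200, "ok"))   // the case recorded in the log
	fmt.Println(healthy(500, "fail")) // an unhealthy apiserver
}
```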
	I0717 00:45:20.529602     784 round_trippers.go:463] GET https://127.0.0.1:63088/version
	I0717 00:45:20.529602     784 round_trippers.go:469] Request Headers:
	I0717 00:45:20.529659     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:20.529659     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:20.532534     784 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 00:45:20.532534     784 round_trippers.go:577] Response Headers:
	I0717 00:45:20.532534     784 round_trippers.go:580]     Content-Length: 263
	I0717 00:45:20.532534     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:20 GMT
	I0717 00:45:20.532534     784 round_trippers.go:580]     Audit-Id: 48e691eb-2164-4ac2-9a1d-ce92840de10a
	I0717 00:45:20.532534     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:20.532534     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:20.532534     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:20.532534     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:20.532534     784 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.2",
	  "gitCommit": "39683505b630ff2121012f3c5b16215a1449d5ed",
	  "gitTreeState": "clean",
	  "buildDate": "2024-06-11T20:21:00Z",
	  "goVersion": "go1.22.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0717 00:45:20.532534     784 api_server.go:141] control plane version: v1.30.2
	I0717 00:45:20.532534     784 api_server.go:131] duration metric: took 15.4958ms to wait for apiserver health ...
	I0717 00:45:20.532534     784 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 00:45:20.533063     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods
	I0717 00:45:20.533063     784 round_trippers.go:469] Request Headers:
	I0717 00:45:20.533156     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:20.533156     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:20.541051     784 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 00:45:20.541051     784 round_trippers.go:577] Response Headers:
	I0717 00:45:20.541051     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:20.541051     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:20 GMT
	I0717 00:45:20.541051     784 round_trippers.go:580]     Audit-Id: bb87d2d2-a439-4b1d-95f0-1c177942808c
	I0717 00:45:20.541051     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:20.541051     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:20.541051     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:20.542914     784 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"504"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-wz2jh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"68da0227-f0f9-4feb-a17c-6282b313b353","resourceVersion":"488","creationTimestamp":"2024-07-17T00:44:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b84c9d6-b98a-4a91-b9d9-06ef869db86f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:44:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b84c9d6-b98a-4a91-b9d9-06ef869db86f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51751 chars]
	I0717 00:45:20.546689     784 system_pods.go:59] 7 kube-system pods found
	I0717 00:45:20.546689     784 system_pods.go:61] "coredns-7db6d8ff4d-wz2jh" [68da0227-f0f9-4feb-a17c-6282b313b353] Running
	I0717 00:45:20.546689     784 system_pods.go:61] "etcd-functional-965000" [6771d196-310e-4275-ab61-1f295b066578] Running
	I0717 00:45:20.546689     784 system_pods.go:61] "kube-apiserver-functional-965000" [6064fa8b-0e59-4c2e-bf51-9f444e7b247e] Running
	I0717 00:45:20.546689     784 system_pods.go:61] "kube-controller-manager-functional-965000" [3e64de1f-68d7-4046-8c53-84e7d439d726] Running
	I0717 00:45:20.546689     784 system_pods.go:61] "kube-proxy-jsqf2" [d6190a0c-9716-4686-85dc-6033bd40a184] Running
	I0717 00:45:20.546689     784 system_pods.go:61] "kube-scheduler-functional-965000" [5132531f-55d1-4d5e-93c4-1bee32490e40] Running
	I0717 00:45:20.546689     784 system_pods.go:61] "storage-provisioner" [c0f5142c-ffc2-469e-b7eb-75daaf5247cc] Running
	I0717 00:45:20.546689     784 system_pods.go:74] duration metric: took 14.1547ms to wait for pod list to return data ...
	I0717 00:45:20.546689     784 default_sa.go:34] waiting for default service account to be created ...
	I0717 00:45:20.546689     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/default/serviceaccounts
	I0717 00:45:20.546689     784 round_trippers.go:469] Request Headers:
	I0717 00:45:20.546689     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:20.546689     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:20.547445     784 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0717 00:45:20.551297     784 round_trippers.go:577] Response Headers:
	I0717 00:45:20.551405     784 round_trippers.go:580]     Audit-Id: 748ae05e-c4bb-412a-bf06-e73723d29c9d
	I0717 00:45:20.551470     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:20.551470     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:20.551533     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:20.551533     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:20.551533     784 round_trippers.go:580]     Content-Length: 261
	I0717 00:45:20.551592     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:20 GMT
	I0717 00:45:20.551592     784 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"504"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"555489ae-f124-4972-9352-a0b8bfc9ce71","resourceVersion":"310","creationTimestamp":"2024-07-17T00:44:11Z"}}]}
	I0717 00:45:20.552006     784 default_sa.go:45] found service account: "default"
	I0717 00:45:20.552006     784 default_sa.go:55] duration metric: took 5.3172ms for default service account to be created ...
	I0717 00:45:20.552006     784 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 00:45:20.552097     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/namespaces/kube-system/pods
	I0717 00:45:20.552097     784 round_trippers.go:469] Request Headers:
	I0717 00:45:20.554155     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:20.554155     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:20.560048     784 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0717 00:45:20.560112     784 round_trippers.go:577] Response Headers:
	I0717 00:45:20.560112     784 round_trippers.go:580]     Audit-Id: e0926b92-5cca-402a-a05c-a4fc78b8fa3a
	I0717 00:45:20.560112     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:20.560166     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:20.560166     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:20.560166     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:20.560208     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:20 GMT
	I0717 00:45:20.560816     784 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"504"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-wz2jh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"68da0227-f0f9-4feb-a17c-6282b313b353","resourceVersion":"488","creationTimestamp":"2024-07-17T00:44:11Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b84c9d6-b98a-4a91-b9d9-06ef869db86f","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-17T00:44:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b84c9d6-b98a-4a91-b9d9-06ef869db86f\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51751 chars]
	I0717 00:45:20.563741     784 system_pods.go:86] 7 kube-system pods found
	I0717 00:45:20.563741     784 system_pods.go:89] "coredns-7db6d8ff4d-wz2jh" [68da0227-f0f9-4feb-a17c-6282b313b353] Running
	I0717 00:45:20.563741     784 system_pods.go:89] "etcd-functional-965000" [6771d196-310e-4275-ab61-1f295b066578] Running
	I0717 00:45:20.563741     784 system_pods.go:89] "kube-apiserver-functional-965000" [6064fa8b-0e59-4c2e-bf51-9f444e7b247e] Running
	I0717 00:45:20.563741     784 system_pods.go:89] "kube-controller-manager-functional-965000" [3e64de1f-68d7-4046-8c53-84e7d439d726] Running
	I0717 00:45:20.563741     784 system_pods.go:89] "kube-proxy-jsqf2" [d6190a0c-9716-4686-85dc-6033bd40a184] Running
	I0717 00:45:20.563741     784 system_pods.go:89] "kube-scheduler-functional-965000" [5132531f-55d1-4d5e-93c4-1bee32490e40] Running
	I0717 00:45:20.563741     784 system_pods.go:89] "storage-provisioner" [c0f5142c-ffc2-469e-b7eb-75daaf5247cc] Running
	I0717 00:45:20.563741     784 system_pods.go:126] duration metric: took 11.7345ms to wait for k8s-apps to be running ...
	I0717 00:45:20.563741     784 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 00:45:20.573462     784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 00:45:20.601917     784 system_svc.go:56] duration metric: took 38.1754ms WaitForService to wait for kubelet
	I0717 00:45:20.601917     784 kubeadm.go:582] duration metric: took 25.3939009s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 00:45:20.601917     784 node_conditions.go:102] verifying NodePressure condition ...
	I0717 00:45:20.602457     784 round_trippers.go:463] GET https://127.0.0.1:63088/api/v1/nodes
	I0717 00:45:20.602457     784 round_trippers.go:469] Request Headers:
	I0717 00:45:20.602457     784 round_trippers.go:473]     Accept: application/json, */*
	I0717 00:45:20.602457     784 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0717 00:45:20.610064     784 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 00:45:20.610064     784 round_trippers.go:577] Response Headers:
	I0717 00:45:20.610064     784 round_trippers.go:580]     Audit-Id: 396cea38-a7db-43e4-9191-abd4e6cb8965
	I0717 00:45:20.610064     784 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 00:45:20.610064     784 round_trippers.go:580]     Content-Type: application/json
	I0717 00:45:20.610064     784 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 1a4985d2-cd76-4257-80f9-dfd43c2114d7
	I0717 00:45:20.610064     784 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a8270ea2-c1f0-4bbd-9332-496913fbb0f2
	I0717 00:45:20.610064     784 round_trippers.go:580]     Date: Wed, 17 Jul 2024 00:45:20 GMT
	I0717 00:45:20.617010     784 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"504"},"items":[{"metadata":{"name":"functional-965000","uid":"a6f90c97-6dd5-44e8-bd73-142293ce6d21","resourceVersion":"397","creationTimestamp":"2024-07-17T00:43:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-965000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"3cfbbb17fd76400a5ee2ea427db7148a0ef7c185","minikube.k8s.io/name":"functional-965000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_17T00_43_58_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4908 chars]
	I0717 00:45:20.619800     784 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0717 00:45:20.619853     784 node_conditions.go:123] node cpu capacity is 16
	I0717 00:45:20.619952     784 node_conditions.go:105] duration metric: took 18.0355ms to run NodePressure ...
	I0717 00:45:20.619998     784 start.go:241] waiting for startup goroutines ...
	I0717 00:45:20.619998     784 start.go:246] waiting for cluster config update ...
	I0717 00:45:20.620105     784 start.go:255] writing updated cluster config ...
	I0717 00:45:20.634659     784 ssh_runner.go:195] Run: rm -f paused
	I0717 00:45:20.840840     784 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0717 00:45:20.844262     784 out.go:177] * Done! kubectl is now configured to use "functional-965000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jul 17 00:44:52 functional-965000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Jul 17 00:44:52 functional-965000 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	Jul 17 00:44:52 functional-965000 systemd[1]: cri-docker.service: Deactivated successfully.
	Jul 17 00:44:52 functional-965000 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	Jul 17 00:44:52 functional-965000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Jul 17 00:44:52 functional-965000 cri-dockerd[5284]: time="2024-07-17T00:44:52Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Jul 17 00:44:52 functional-965000 cri-dockerd[5284]: time="2024-07-17T00:44:52Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Jul 17 00:44:52 functional-965000 cri-dockerd[5284]: time="2024-07-17T00:44:52Z" level=info msg="Start docker client with request timeout 0s"
	Jul 17 00:44:52 functional-965000 cri-dockerd[5284]: time="2024-07-17T00:44:52Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Jul 17 00:44:52 functional-965000 cri-dockerd[5284]: time="2024-07-17T00:44:52Z" level=info msg="Loaded network plugin cni"
	Jul 17 00:44:52 functional-965000 cri-dockerd[5284]: time="2024-07-17T00:44:52Z" level=info msg="Docker cri networking managed by network plugin cni"
	Jul 17 00:44:52 functional-965000 cri-dockerd[5284]: time="2024-07-17T00:44:52Z" level=info msg="Setting cgroupDriver cgroupfs"
	Jul 17 00:44:52 functional-965000 cri-dockerd[5284]: time="2024-07-17T00:44:52Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Jul 17 00:44:52 functional-965000 cri-dockerd[5284]: time="2024-07-17T00:44:52Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Jul 17 00:44:52 functional-965000 cri-dockerd[5284]: time="2024-07-17T00:44:52Z" level=info msg="Start cri-dockerd grpc backend"
	Jul 17 00:44:52 functional-965000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Jul 17 00:44:52 functional-965000 cri-dockerd[5284]: time="2024-07-17T00:44:52Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-wz2jh_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"aa3c2eb70eeee9d3da87b423547c5c04c5d9b3750a7c77ff872633aae9707202\""
	Jul 17 00:44:58 functional-965000 cri-dockerd[5284]: time="2024-07-17T00:44:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f1f821190ea5ce5c89f65af6c1fda3f8aaf153788970ca50815402fb3884293b/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jul 17 00:44:59 functional-965000 cri-dockerd[5284]: time="2024-07-17T00:44:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4bc56fd930e05abbd1d7a2b81caa46b007abe183b8fccb7ccf63cba449c4b6c0/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jul 17 00:44:59 functional-965000 cri-dockerd[5284]: time="2024-07-17T00:44:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9632354ef2694df384f0c113792594f8e8999a77f2f326860fa2217d470cf7e2/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jul 17 00:44:59 functional-965000 cri-dockerd[5284]: time="2024-07-17T00:44:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a1c791bf5686721bd09a94c33bfc08845e758f749e264397f6c1044d9ae197a3/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jul 17 00:44:59 functional-965000 cri-dockerd[5284]: time="2024-07-17T00:44:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1a2150358c615f55cd01dd7415adc02bd8eb4ae92032ac43bec8903c55ba1cb5/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jul 17 00:44:59 functional-965000 cri-dockerd[5284]: time="2024-07-17T00:44:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ba77f2eed4cd1a77040d12792c9da337c04e155d2699ed7884832b6cfae1d642/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jul 17 00:44:59 functional-965000 dockerd[4906]: time="2024-07-17T00:44:59.922471375Z" level=info msg="ignoring event" container=c9a465ae07a22eda472a8ccb6d5bd31fcc19c18dfe838923c8d905be37d597e4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 00:44:59 functional-965000 cri-dockerd[5284]: time="2024-07-17T00:44:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e4463c486684eeef8766d874138e2c481efdcc318caf530412a62c09425dbf3c/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	bbde407c88413       6e38f40d628db       29 seconds ago       Running             storage-provisioner       2                   f1f821190ea5c       storage-provisioner
	d59e6a3d917d3       cbb01a7bd410d       45 seconds ago       Running             coredns                   1                   e4463c486684e       coredns-7db6d8ff4d-wz2jh
	09cb8ab2213fe       56ce0fd9fb532       46 seconds ago       Running             kube-apiserver            1                   ba77f2eed4cd1       kube-apiserver-functional-965000
	c259a18fc1c06       7820c83aa1394       46 seconds ago       Running             kube-scheduler            1                   1a2150358c615       kube-scheduler-functional-965000
	a2e9c719ce986       e874818b3caac       46 seconds ago       Running             kube-controller-manager   1                   a1c791bf56867       kube-controller-manager-functional-965000
	d134d759dcd20       3861cfcd7c04c       46 seconds ago       Running             etcd                      1                   9632354ef2694       etcd-functional-965000
	4f27496d036cb       53c535741fb44       46 seconds ago       Running             kube-proxy                1                   4bc56fd930e05       kube-proxy-jsqf2
	c9a465ae07a22       6e38f40d628db       47 seconds ago       Exited              storage-provisioner       1                   f1f821190ea5c       storage-provisioner
	db35105ee41b4       cbb01a7bd410d       About a minute ago   Exited              coredns                   0                   aa3c2eb70eeee       coredns-7db6d8ff4d-wz2jh
	0d108b24c5488       53c535741fb44       About a minute ago   Exited              kube-proxy                0                   b216288f1b0c4       kube-proxy-jsqf2
	ece963252e123       3861cfcd7c04c       About a minute ago   Exited              etcd                      0                   5b2b72a4c11cf       etcd-functional-965000
	527b5afbe5ab5       e874818b3caac       About a minute ago   Exited              kube-controller-manager   0                   71354c2b2357c       kube-controller-manager-functional-965000
	57cbcbc4f0c9e       56ce0fd9fb532       About a minute ago   Exited              kube-apiserver            0                   4419877b74933       kube-apiserver-functional-965000
	7b0296e0f6429       7820c83aa1394       About a minute ago   Exited              kube-scheduler            0                   cb3d68f8d90c8       kube-scheduler-functional-965000
	
	
	==> coredns [d59e6a3d917d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:43580 - 65438 "HINFO IN 7915414069527512292.3000944145248426260. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.045721743s
	
	
	==> coredns [db35105ee41b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-965000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-965000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=functional-965000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T00_43_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 00:43:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-965000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 00:45:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 00:45:29 +0000   Wed, 17 Jul 2024 00:43:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 00:45:29 +0000   Wed, 17 Jul 2024 00:43:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 00:45:29 +0000   Wed, 17 Jul 2024 00:43:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 00:45:29 +0000   Wed, 17 Jul 2024 00:43:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-965000
	Capacity:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868764Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868764Ki
	  pods:               110
	System Info:
	  Machine ID:                 3963b2315b0341679d965d0c930363e3
	  System UUID:                3963b2315b0341679d965d0c930363e3
	  Boot ID:                    c8c682c7-038f-4949-bfeb-6c51c261a4de
	  Kernel Version:             5.15.146.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-wz2jh                     100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     94s
	  kube-system                 etcd-functional-965000                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         107s
	  kube-system                 kube-apiserver-functional-965000             250m (1%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-controller-manager-functional-965000    200m (1%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-proxy-jsqf2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-scheduler-functional-965000             100m (0%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (4%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 90s   kube-proxy       
	  Normal  Starting                 37s   kube-proxy       
	  Normal  Starting                 108s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  108s  kubelet          Node functional-965000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s  kubelet          Node functional-965000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     108s  kubelet          Node functional-965000 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             108s  kubelet          Node functional-965000 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  107s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                107s  kubelet          Node functional-965000 status is now: NodeReady
	  Normal  RegisteredNode           95s   node-controller  Node functional-965000 event: Registered Node functional-965000 in Controller
	  Normal  RegisteredNode           26s   node-controller  Node functional-965000 event: Registered Node functional-965000 in Controller
	
	
	==> dmesg <==
	[  +0.001058] FS-Cache: O-cookie c=00000006 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001229] FS-Cache: O-cookie d=00000000d644147d{9P.session} n=000000006201c53c
	[  +0.001174] FS-Cache: O-key=[10] '34323934393337343735'
	[  +0.000817] FS-Cache: N-cookie c=00000007 [p=00000002 fl=2 nc=0 na=1]
	[  +0.001103] FS-Cache: N-cookie d=00000000d644147d{9P.session} n=0000000023399480
	[  +0.001450] FS-Cache: N-key=[10] '34323934393337343735'
	[  +0.941940] WSL (1) ERROR: ConfigApplyWindowsLibPath:2537: open /etc/ld.so.conf.d/ld.wsl.conf
	[  +0.000005]  failed 2
	[  +0.058380] WSL (1) WARNING: /usr/share/zoneinfo/Etc/UTC not found. Is the tzdata package installed?
	[  +0.579542] misc dxg: dxgk: dxgglobal_acquire_channel_lock: Failed to acquire global channel lock
	[  +0.324531] WSL (2) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.002125] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.003033] WSL (1) ERROR: ConfigMountFsTab:2589: Processing fstab with mount -a failed.
	[  +0.003674] WSL (1) ERROR: ConfigApplyWindowsLibPath:2537: open /etc/ld.so.conf.d/ld.wsl.conf
	[  +0.000003]  failed 2
	[  +0.006706] WSL (3) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.001818] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.004345] WSL (4) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.002202] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.011260] WSL (1) WARNING: /usr/share/zoneinfo/Etc/UTC not found. Is the tzdata package installed?
	[  +0.193778] misc dxg: dxgk: dxgglobal_acquire_channel_lock: Failed to acquire global channel lock
	[  +0.644798] netlink: 'init': attribute type 4 has an invalid length.
	
	
	==> etcd [d134d759dcd2] <==
	{"level":"info","ts":"2024-07-17T00:45:01.411206Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-07-17T00:45:01.411257Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-17T00:45:01.411307Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-17T00:45:03.112347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-17T00:45:03.112522Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-17T00:45:03.112588Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-07-17T00:45:03.112606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-07-17T00:45:03.112668Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-07-17T00:45:03.112702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-07-17T00:45:03.112711Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-07-17T00:45:03.122245Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-965000 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-17T00:45:03.12234Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T00:45:03.122602Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-17T00:45:03.124837Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-17T00:45:03.124937Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-17T00:45:03.125457Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-17T00:45:03.196527Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2024-07-17T00:45:05.713505Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.911219ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128030573858102144 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/etcd-functional-965000.17e2d96f194e88b6\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/etcd-functional-965000.17e2d96f194e88b6\" value_size:762 lease:8128030573858102140 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-17T00:45:05.713887Z","caller":"traceutil/trace.go:171","msg":"trace[961110191] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"208.03314ms","start":"2024-07-17T00:45:05.505828Z","end":"2024-07-17T00:45:05.713862Z","steps":["trace[961110191] 'process raft request'  (duration: 100.316451ms)","trace[961110191] 'compare'  (duration: 106.808611ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T00:45:05.714044Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.211052ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:420"}
	{"level":"info","ts":"2024-07-17T00:45:05.714082Z","caller":"traceutil/trace.go:171","msg":"trace[1660990464] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:420; }","duration":"201.329163ms","start":"2024-07-17T00:45:05.512743Z","end":"2024-07-17T00:45:05.714072Z","steps":["trace[1660990464] 'agreement among raft nodes before linearized reading'  (duration: 201.18335ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:45:05.714244Z","caller":"traceutil/trace.go:171","msg":"trace[160444155] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"205.660736ms","start":"2024-07-17T00:45:05.508562Z","end":"2024-07-17T00:45:05.714223Z","steps":["trace[160444155] 'process raft request'  (duration: 205.184094ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:45:05.71389Z","caller":"traceutil/trace.go:171","msg":"trace[2070787215] linearizableReadLoop","detail":"{readStateIndex:439; appliedIndex:437; }","duration":"201.059639ms","start":"2024-07-17T00:45:05.51282Z","end":"2024-07-17T00:45:05.713879Z","steps":["trace[2070787215] 'read index received'  (duration: 93.272144ms)","trace[2070787215] 'applied index is now lower than readState.Index'  (duration: 107.786195ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T00:45:05.714598Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.090684ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-wz2jh\" ","response":"range_response_count:1 size:4790"}
	{"level":"info","ts":"2024-07-17T00:45:05.714636Z","caller":"traceutil/trace.go:171","msg":"trace[436594993] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7db6d8ff4d-wz2jh; range_end:; response_count:1; response_revision:420; }","duration":"118.15899ms","start":"2024-07-17T00:45:05.596465Z","end":"2024-07-17T00:45:05.714624Z","steps":["trace[436594993] 'agreement among raft nodes before linearized reading'  (duration: 118.094185ms)"],"step_count":1}
	
	
	==> etcd [ece963252e12] <==
	{"level":"info","ts":"2024-07-17T00:43:51.630271Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T00:43:51.630318Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-17T00:43:51.631532Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-07-17T00:44:12.012596Z","caller":"traceutil/trace.go:171","msg":"trace[597909839] linearizableReadLoop","detail":"{readStateIndex:349; appliedIndex:347; }","duration":"100.300381ms","start":"2024-07-17T00:44:11.91227Z","end":"2024-07-17T00:44:12.012571Z","steps":["trace[597909839] 'read index received'  (duration: 10.77882ms)","trace[597909839] 'applied index is now lower than readState.Index'  (duration: 89.520361ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T00:44:12.012885Z","caller":"traceutil/trace.go:171","msg":"trace[252855159] transaction","detail":"{read_only:false; response_revision:338; number_of_response:1; }","duration":"103.689795ms","start":"2024-07-17T00:44:11.909064Z","end":"2024-07-17T00:44:12.012754Z","steps":["trace[252855159] 'process raft request'  (duration: 99.992143ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-17T00:44:12.013248Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.87445ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7db6d8ff4d-qgbxl\" ","response":"range_response_count:1 size:3575"}
	{"level":"info","ts":"2024-07-17T00:44:12.0134Z","caller":"traceutil/trace.go:171","msg":"trace[177806027] transaction","detail":"{read_only:false; response_revision:339; number_of_response:1; }","duration":"102.314127ms","start":"2024-07-17T00:44:11.911063Z","end":"2024-07-17T00:44:12.013377Z","steps":["trace[177806027] 'process raft request'  (duration: 101.422517ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:44:12.013441Z","caller":"traceutil/trace.go:171","msg":"trace[1077241050] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7db6d8ff4d-qgbxl; range_end:; response_count:1; response_revision:339; }","duration":"101.369311ms","start":"2024-07-17T00:44:11.912056Z","end":"2024-07-17T00:44:12.013425Z","steps":["trace[1077241050] 'agreement among raft nodes before linearized reading'  (duration: 101.022069ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:44:12.409644Z","caller":"traceutil/trace.go:171","msg":"trace[860599798] transaction","detail":"{read_only:false; response_revision:345; number_of_response:1; }","duration":"102.057995ms","start":"2024-07-17T00:44:12.307562Z","end":"2024-07-17T00:44:12.40962Z","steps":["trace[860599798] 'process raft request'  (duration: 101.986686ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:44:12.409784Z","caller":"traceutil/trace.go:171","msg":"trace[1698557814] linearizableReadLoop","detail":"{readStateIndex:354; appliedIndex:353; }","duration":"103.045717ms","start":"2024-07-17T00:44:12.306718Z","end":"2024-07-17T00:44:12.409763Z","steps":["trace[1698557814] 'read index received'  (duration: 15.531902ms)","trace[1698557814] 'applied index is now lower than readState.Index'  (duration: 87.512014ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T00:44:12.409791Z","caller":"traceutil/trace.go:171","msg":"trace[4352928] transaction","detail":"{read_only:false; response_revision:344; number_of_response:1; }","duration":"111.180113ms","start":"2024-07-17T00:44:12.298592Z","end":"2024-07-17T00:44:12.409772Z","steps":["trace[4352928] 'process raft request'  (duration: 23.448771ms)","trace[4352928] 'compare'  (duration: 86.795427ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-17T00:44:12.410334Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.431064ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/coredns\" ","response":"range_response_count:1 size:179"}
	{"level":"info","ts":"2024-07-17T00:44:12.41077Z","caller":"traceutil/trace.go:171","msg":"trace[465819003] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; response_count:1; response_revision:345; }","duration":"104.062641ms","start":"2024-07-17T00:44:12.306675Z","end":"2024-07-17T00:44:12.410737Z","steps":["trace[465819003] 'agreement among raft nodes before linearized reading'  (duration: 103.147829ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:44:13.802858Z","caller":"traceutil/trace.go:171","msg":"trace[1800545581] transaction","detail":"{read_only:false; response_revision:352; number_of_response:1; }","duration":"101.237482ms","start":"2024-07-17T00:44:13.701594Z","end":"2024-07-17T00:44:13.802832Z","steps":["trace[1800545581] 'process raft request'  (duration: 97.373277ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-17T00:44:13.924516Z","caller":"traceutil/trace.go:171","msg":"trace[1509167657] transaction","detail":"{read_only:false; response_revision:355; number_of_response:1; }","duration":"112.234349ms","start":"2024-07-17T00:44:13.812234Z","end":"2024-07-17T00:44:13.924468Z","steps":["trace[1509167657] 'process raft request'  (duration: 98.951501ms)","trace[1509167657] 'compare'  (duration: 13.081932ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-17T00:44:36.296696Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-17T00:44:36.296902Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-965000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-07-17T00:44:36.297016Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T00:44:36.297129Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T00:44:36.305272Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-17T00:44:36.305341Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-17T00:44:36.305401Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-07-17T00:44:36.413325Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-07-17T00:44:36.413688Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-07-17T00:44:36.413722Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-965000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 00:45:45 up  2:32,  0 users,  load average: 2.85, 1.85, 1.29
	Linux functional-965000 5.15.146.1-microsoft-standard-WSL2 #1 SMP Thu Jan 11 04:09:03 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [09cb8ab2213f] <==
	I0717 00:45:05.227894       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0717 00:45:05.227923       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0717 00:45:05.228869       1 aggregator.go:163] waiting for initial CRD sync...
	I0717 00:45:05.228884       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0717 00:45:05.228891       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0717 00:45:05.394733       1 shared_informer.go:320] Caches are synced for configmaps
	I0717 00:45:05.494375       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 00:45:05.495271       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0717 00:45:05.495407       1 policy_source.go:224] refreshing policies
	I0717 00:45:05.495431       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0717 00:45:05.495278       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0717 00:45:05.495367       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0717 00:45:05.495490       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0717 00:45:05.495528       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 00:45:05.495635       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0717 00:45:05.495661       1 aggregator.go:165] initial CRD sync complete...
	I0717 00:45:05.495679       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 00:45:05.495685       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 00:45:05.495693       1 cache.go:39] Caches are synced for autoregister controller
	I0717 00:45:05.499678       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 00:45:05.505666       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0717 00:45:05.794565       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0717 00:45:06.317915       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 00:45:19.683366       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 00:45:19.782312       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [57cbcbc4f0c9] <==
	W0717 00:44:45.584042       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 00:44:45.616630       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 00:44:45.656148       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 00:44:45.657689       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 00:44:45.705052       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 00:44:45.726011       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 00:44:45.733627       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 00:44:45.811137       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 00:44:45.813700       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 00:44:45.833155       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 00:44:45.848769       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 00:44:45.943054       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 00:44:45.967773       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 00:44:45.968560       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 00:44:45.996420       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 00:44:46.069090       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 00:44:46.088007       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 00:44:46.111750       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 00:44:46.127953       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 00:44:46.224578       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 00:44:46.242989       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 00:44:46.284946       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 00:44:46.333502       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 00:44:46.383155       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 00:44:46.384583       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [527b5afbe5ab] <==
	I0717 00:44:10.830922       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 00:44:10.859905       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0717 00:44:10.860099       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0717 00:44:10.861503       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0717 00:44:10.861649       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0717 00:44:10.864356       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0717 00:44:10.876104       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 00:44:10.974114       1 shared_informer.go:320] Caches are synced for persistent volume
	I0717 00:44:11.395729       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 00:44:11.395852       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0717 00:44:11.407777       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 00:44:12.102026       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="559.80314ms"
	I0717 00:44:12.214577       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="112.455169ms"
	I0717 00:44:12.214827       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="120.015µs"
	I0717 00:44:12.415223       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="249.431µs"
	I0717 00:44:13.998555       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="472.992691ms"
	I0717 00:44:14.018961       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="20.277499ms"
	I0717 00:44:14.019195       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="79.206µs"
	I0717 00:44:16.106072       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="178.414µs"
	I0717 00:44:17.528876       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="14.731761ms"
	I0717 00:44:17.529168       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="97.908µs"
	I0717 00:44:26.188772       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="207.818µs"
	I0717 00:44:26.615280       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.605µs"
	I0717 00:44:26.638330       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="96.808µs"
	I0717 00:44:26.651672       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.704µs"
	
	
	==> kube-controller-manager [a2e9c719ce98] <==
	I0717 00:45:19.593395       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"functional-965000\" does not exist"
	I0717 00:45:19.595649       1 shared_informer.go:320] Caches are synced for namespace
	I0717 00:45:19.597917       1 shared_informer.go:320] Caches are synced for PVC protection
	I0717 00:45:19.600571       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0717 00:45:19.614294       1 shared_informer.go:320] Caches are synced for daemon sets
	I0717 00:45:19.629059       1 shared_informer.go:320] Caches are synced for TTL
	I0717 00:45:19.633410       1 shared_informer.go:320] Caches are synced for attach detach
	I0717 00:45:19.644502       1 shared_informer.go:320] Caches are synced for node
	I0717 00:45:19.644606       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0717 00:45:19.644629       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0717 00:45:19.644635       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0717 00:45:19.644641       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0717 00:45:19.648502       1 shared_informer.go:320] Caches are synced for persistent volume
	I0717 00:45:19.657300       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0717 00:45:19.681319       1 shared_informer.go:320] Caches are synced for endpoint
	I0717 00:45:19.711554       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0717 00:45:19.731019       1 shared_informer.go:320] Caches are synced for taint
	I0717 00:45:19.731159       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0717 00:45:19.731219       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-965000"
	I0717 00:45:19.731264       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0717 00:45:19.758024       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 00:45:19.786564       1 shared_informer.go:320] Caches are synced for resource quota
	I0717 00:45:20.214952       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 00:45:20.278588       1 shared_informer.go:320] Caches are synced for garbage collector
	I0717 00:45:20.278748       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [0d108b24c548] <==
	I0717 00:44:15.247164       1 server_linux.go:69] "Using iptables proxy"
	I0717 00:44:15.269084       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0717 00:44:15.336141       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0717 00:44:15.336269       1 server_linux.go:165] "Using iptables Proxier"
	I0717 00:44:15.341610       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0717 00:44:15.341738       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0717 00:44:15.341860       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 00:44:15.342610       1 server.go:872] "Version info" version="v1.30.2"
	I0717 00:44:15.342648       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:44:15.343768       1 config.go:192] "Starting service config controller"
	I0717 00:44:15.343871       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 00:44:15.344001       1 config.go:319] "Starting node config controller"
	I0717 00:44:15.344014       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 00:44:15.344050       1 config.go:101] "Starting endpoint slice config controller"
	I0717 00:44:15.344169       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 00:44:15.444443       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0717 00:44:15.444550       1 shared_informer.go:320] Caches are synced for node config
	I0717 00:44:15.444556       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [4f27496d036c] <==
	I0717 00:45:00.604086       1 server_linux.go:69] "Using iptables proxy"
	E0717 00:45:00.608949       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-965000\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0717 00:45:05.402037       1 server.go:1051] "Failed to retrieve node info" err="nodes \"functional-965000\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot get resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found]"
	I0717 00:45:07.497279       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0717 00:45:07.618331       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0717 00:45:07.618761       1 server_linux.go:165] "Using iptables Proxier"
	I0717 00:45:07.626777       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0717 00:45:07.626881       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0717 00:45:07.626918       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 00:45:07.627505       1 server.go:872] "Version info" version="v1.30.2"
	I0717 00:45:07.627716       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:45:07.629240       1 config.go:192] "Starting service config controller"
	I0717 00:45:07.631225       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0717 00:45:07.630540       1 config.go:101] "Starting endpoint slice config controller"
	I0717 00:45:07.631253       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0717 00:45:07.630845       1 config.go:319] "Starting node config controller"
	I0717 00:45:07.631265       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0717 00:45:07.732430       1 shared_informer.go:320] Caches are synced for node config
	I0717 00:45:07.732540       1 shared_informer.go:320] Caches are synced for service config
	I0717 00:45:07.732564       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7b0296e0f642] <==
	E0717 00:43:55.222190       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 00:43:55.226746       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 00:43:55.226865       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 00:43:55.240296       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 00:43:55.240403       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 00:43:55.286258       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 00:43:55.286355       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 00:43:55.312246       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 00:43:55.312299       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 00:43:55.330418       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 00:43:55.330550       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 00:43:55.342698       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 00:43:55.342807       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 00:43:55.423712       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 00:43:55.423820       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 00:43:55.528631       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 00:43:55.528747       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0717 00:43:55.569864       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 00:43:55.570132       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 00:43:55.613840       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 00:43:55.613944       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0717 00:43:57.618624       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 00:44:36.297607       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0717 00:44:36.298229       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0717 00:44:36.297620       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c259a18fc1c0] <==
	I0717 00:45:02.909963       1 serving.go:380] Generated self-signed cert in-memory
	W0717 00:45:05.395303       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 00:45:05.395590       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 00:45:05.395818       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 00:45:05.396087       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 00:45:05.509247       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0717 00:45:05.509686       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 00:45:05.594829       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 00:45:05.595113       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 00:45:05.595254       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0717 00:45:05.595948       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 00:45:05.695915       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 17 00:45:01 functional-965000 kubelet[2673]: I0717 00:45:01.512224    2673 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba77f2eed4cd1a77040d12792c9da337c04e155d2699ed7884832b6cfae1d642"
	Jul 17 00:45:01 functional-965000 kubelet[2673]: I0717 00:45:01.513195    2673 status_manager.go:853] "Failed to get status for pod" podUID="ea288c3d0d4f9ec59e0cc124b9c0c2c4" pod="kube-system/kube-scheduler-functional-965000" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-965000\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jul 17 00:45:01 functional-965000 kubelet[2673]: I0717 00:45:01.513917    2673 status_manager.go:853] "Failed to get status for pod" podUID="1f96f42d4c4d9ac651471a09c85d1277" pod="kube-system/kube-apiserver-functional-965000" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-965000\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jul 17 00:45:01 functional-965000 kubelet[2673]: I0717 00:45:01.514342    2673 status_manager.go:853] "Failed to get status for pod" podUID="3823ded6d2fcdfc98e53ff27b6721e6a" pod="kube-system/etcd-functional-965000" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-965000\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jul 17 00:45:01 functional-965000 kubelet[2673]: I0717 00:45:01.514757    2673 status_manager.go:853] "Failed to get status for pod" podUID="a70bc62f5b603e4a3c252518143d0b34" pod="kube-system/kube-controller-manager-functional-965000" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-965000\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jul 17 00:45:01 functional-965000 kubelet[2673]: I0717 00:45:01.515140    2673 status_manager.go:853] "Failed to get status for pod" podUID="d6190a0c-9716-4686-85dc-6033bd40a184" pod="kube-system/kube-proxy-jsqf2" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-jsqf2\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jul 17 00:45:01 functional-965000 kubelet[2673]: I0717 00:45:01.515609    2673 status_manager.go:853] "Failed to get status for pod" podUID="68da0227-f0f9-4feb-a17c-6282b313b353" pod="kube-system/coredns-7db6d8ff4d-wz2jh" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-wz2jh\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jul 17 00:45:01 functional-965000 kubelet[2673]: I0717 00:45:01.516103    2673 status_manager.go:853] "Failed to get status for pod" podUID="c0f5142c-ffc2-469e-b7eb-75daaf5247cc" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jul 17 00:45:01 functional-965000 kubelet[2673]: I0717 00:45:01.611476    2673 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bc56fd930e05abbd1d7a2b81caa46b007abe183b8fccb7ccf63cba449c4b6c0"
	Jul 17 00:45:01 functional-965000 kubelet[2673]: I0717 00:45:01.613251    2673 status_manager.go:853] "Failed to get status for pod" podUID="68da0227-f0f9-4feb-a17c-6282b313b353" pod="kube-system/coredns-7db6d8ff4d-wz2jh" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-wz2jh\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jul 17 00:45:01 functional-965000 kubelet[2673]: I0717 00:45:01.614283    2673 status_manager.go:853] "Failed to get status for pod" podUID="c0f5142c-ffc2-469e-b7eb-75daaf5247cc" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jul 17 00:45:01 functional-965000 kubelet[2673]: I0717 00:45:01.614979    2673 status_manager.go:853] "Failed to get status for pod" podUID="ea288c3d0d4f9ec59e0cc124b9c0c2c4" pod="kube-system/kube-scheduler-functional-965000" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-965000\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jul 17 00:45:01 functional-965000 kubelet[2673]: I0717 00:45:01.615597    2673 status_manager.go:853] "Failed to get status for pod" podUID="1f96f42d4c4d9ac651471a09c85d1277" pod="kube-system/kube-apiserver-functional-965000" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-965000\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jul 17 00:45:01 functional-965000 kubelet[2673]: I0717 00:45:01.616532    2673 status_manager.go:853] "Failed to get status for pod" podUID="3823ded6d2fcdfc98e53ff27b6721e6a" pod="kube-system/etcd-functional-965000" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-965000\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jul 17 00:45:01 functional-965000 kubelet[2673]: I0717 00:45:01.616913    2673 status_manager.go:853] "Failed to get status for pod" podUID="a70bc62f5b603e4a3c252518143d0b34" pod="kube-system/kube-controller-manager-functional-965000" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-965000\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jul 17 00:45:01 functional-965000 kubelet[2673]: I0717 00:45:01.617176    2673 status_manager.go:853] "Failed to get status for pod" podUID="d6190a0c-9716-4686-85dc-6033bd40a184" pod="kube-system/kube-proxy-jsqf2" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-jsqf2\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Jul 17 00:45:01 functional-965000 kubelet[2673]: I0717 00:45:01.708702    2673 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1f821190ea5ce5c89f65af6c1fda3f8aaf153788970ca50815402fb3884293b"
	Jul 17 00:45:01 functional-965000 kubelet[2673]: I0717 00:45:01.709090    2673 scope.go:117] "RemoveContainer" containerID="c9a465ae07a22eda472a8ccb6d5bd31fcc19c18dfe838923c8d905be37d597e4"
	Jul 17 00:45:01 functional-965000 kubelet[2673]: E0717 00:45:01.709396    2673 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c0f5142c-ffc2-469e-b7eb-75daaf5247cc)\"" pod="kube-system/storage-provisioner" podUID="c0f5142c-ffc2-469e-b7eb-75daaf5247cc"
	Jul 17 00:45:02 functional-965000 kubelet[2673]: I0717 00:45:02.800063    2673 scope.go:117] "RemoveContainer" containerID="ab459f5c0da7bf40df9f13c6661aff7b2a7fde39979a760d893f283df20f5536"
	Jul 17 00:45:02 functional-965000 kubelet[2673]: I0717 00:45:02.800525    2673 scope.go:117] "RemoveContainer" containerID="c9a465ae07a22eda472a8ccb6d5bd31fcc19c18dfe838923c8d905be37d597e4"
	Jul 17 00:45:02 functional-965000 kubelet[2673]: E0717 00:45:02.800952    2673 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c0f5142c-ffc2-469e-b7eb-75daaf5247cc)\"" pod="kube-system/storage-provisioner" podUID="c0f5142c-ffc2-469e-b7eb-75daaf5247cc"
	Jul 17 00:45:05 functional-965000 kubelet[2673]: E0717 00:45:05.240826    2673 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Jul 17 00:45:05 functional-965000 kubelet[2673]: E0717 00:45:05.294628    2673 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Jul 17 00:45:16 functional-965000 kubelet[2673]: I0717 00:45:16.716138    2673 scope.go:117] "RemoveContainer" containerID="c9a465ae07a22eda472a8ccb6d5bd31fcc19c18dfe838923c8d905be37d597e4"
	
	
	==> storage-provisioner [bbde407c8841] <==
	I0717 00:45:17.023337       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 00:45:17.038173       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 00:45:17.038406       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 00:45:34.450972       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 00:45:34.451278       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"31b1d0d4-9ce0-4669-b7c4-4b5e1548d15e", APIVersion:"v1", ResourceVersion:"509", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-965000_e435e8aa-eeb5-4fa4-a81d-9b37d6a85f57 became leader
	I0717 00:45:34.451447       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-965000_e435e8aa-eeb5-4fa4-a81d-9b37d6a85f57!
	I0717 00:45:34.552935       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-965000_e435e8aa-eeb5-4fa4-a81d-9b37d6a85f57!
	
	
	==> storage-provisioner [c9a465ae07a2] <==
	I0717 00:44:59.894504       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0717 00:44:59.900105       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0717 00:45:43.512075   13696 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-965000 -n functional-965000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-965000 -n functional-965000: (1.377422s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-965000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (6.99s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (2.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-965000 config unset cpus" to be -""- but got *"W0717 00:46:52.609412   13968 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-965000 config get cpus: exit status 14 (332.7867ms)

                                                
                                                
** stderr ** 
	W0717 00:46:53.007550   11240 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-965000 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0717 00:46:53.007550   11240 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-965000 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0717 00:46:53.368486    9608 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-965000 config get cpus" to be -""- but got *"W0717 00:46:53.696422   10224 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-965000 config unset cpus" to be -""- but got *"W0717 00:46:54.039042    5776 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-965000 config get cpus: exit status 14 (294.5999ms)

                                                
                                                
** stderr ** 
	W0717 00:46:54.361711   15036 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-965000 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0717 00:46:54.361711   15036 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube3\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (2.05s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (446.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-556100 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p old-k8s-version-556100 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0: exit status 80 (7m14.0788868s)

                                                
                                                
-- stdout --
	* [old-k8s-version-556100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19264
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-556100" primary control-plane node in "old-k8s-version-556100" cluster
	* Pulling base image v0.0.44-1721146479-19264 ...
	* Restarting existing docker container for "old-k8s-version-556100" ...
	* Preparing Kubernetes v1.20.0 on Docker 27.0.3 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-556100 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0717 02:04:41.859451   10936 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0717 02:04:41.972214   10936 out.go:291] Setting OutFile to fd 1532 ...
	I0717 02:04:41.973225   10936 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 02:04:41.973225   10936 out.go:304] Setting ErrFile to fd 956...
	I0717 02:04:41.973225   10936 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 02:04:42.009223   10936 out.go:298] Setting JSON to false
	I0717 02:04:42.013220   10936 start.go:129] hostinfo: {"hostname":"minikube3","uptime":13897,"bootTime":1721167984,"procs":198,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0717 02:04:42.014238   10936 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 02:04:42.020224   10936 out.go:177] * [old-k8s-version-556100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0717 02:04:42.026208   10936 notify.go:220] Checking for updates...
	I0717 02:04:42.030237   10936 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0717 02:04:42.033224   10936 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 02:04:42.036207   10936 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0717 02:04:42.039225   10936 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 02:04:42.042197   10936 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 02:04:42.045213   10936 config.go:182] Loaded profile config "old-k8s-version-556100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0717 02:04:42.048217   10936 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0717 02:04:42.050204   10936 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 02:04:42.468112   10936 docker.go:123] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0717 02:04:42.484090   10936 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 02:04:42.881526   10936 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:87 OomKillDisable:true NGoroutines:95 SystemTime:2024-07-17 02:04:42.826591574 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0717 02:04:42.889435   10936 out.go:177] * Using the docker driver based on existing profile
	I0717 02:04:42.892103   10936 start.go:297] selected driver: docker
	I0717 02:04:42.892103   10936 start.go:901] validating driver "docker" against &{Name:old-k8s-version-556100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 02:04:42.892380   10936 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 02:04:42.972390   10936 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 02:04:43.379026   10936 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:87 OomKillDisable:true NGoroutines:95 SystemTime:2024-07-17 02:04:43.336785642 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0717 02:04:43.380021   10936 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 02:04:43.380021   10936 cni.go:84] Creating CNI manager for ""
	I0717 02:04:43.380021   10936 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0717 02:04:43.380021   10936 start.go:340] cluster config:
	{Name:old-k8s-version-556100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 02:04:43.385030   10936 out.go:177] * Starting "old-k8s-version-556100" primary control-plane node in "old-k8s-version-556100" cluster
	I0717 02:04:43.388043   10936 cache.go:121] Beginning downloading kic base image for docker with docker
	I0717 02:04:43.391020   10936 out.go:177] * Pulling base image v0.0.44-1721146479-19264 ...
	I0717 02:04:43.394021   10936 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0717 02:04:43.394021   10936 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local docker daemon
	I0717 02:04:43.394021   10936 preload.go:146] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0717 02:04:43.394021   10936 cache.go:56] Caching tarball of preloaded images
	I0717 02:04:43.395048   10936 preload.go:172] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 02:04:43.395048   10936 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0717 02:04:43.395048   10936 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-556100\config.json ...
	W0717 02:04:43.624119   10936 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e is of wrong architecture
	I0717 02:04:43.624119   10936 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e to local cache
	I0717 02:04:43.624119   10936 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721146479-19264@sha256_7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar
	I0717 02:04:43.624119   10936 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721146479-19264@sha256_7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar
	I0717 02:04:43.624119   10936 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local cache directory
	I0717 02:04:43.624119   10936 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local cache directory, skipping pull
	I0717 02:04:43.624119   10936 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e exists in cache, skipping pull
	I0717 02:04:43.624119   10936 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e as a tarball
	I0717 02:04:43.624119   10936 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e from local cache
	I0717 02:04:43.625122   10936 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721146479-19264@sha256_7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar
	I0717 02:04:44.198953   10936 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e from cached tarball
	I0717 02:04:44.198953   10936 cache.go:194] Successfully downloaded all kic artifacts
	I0717 02:04:44.198953   10936 start.go:360] acquireMachinesLock for old-k8s-version-556100: {Name:mk673c1fe442eb5a01f8136a645a7b01a30ef6fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 02:04:44.198953   10936 start.go:364] duration metric: took 0s to acquireMachinesLock for "old-k8s-version-556100"
	I0717 02:04:44.199962   10936 start.go:96] Skipping create...Using existing machine configuration
	I0717 02:04:44.199962   10936 fix.go:54] fixHost starting: 
	I0717 02:04:44.226949   10936 cli_runner.go:164] Run: docker container inspect old-k8s-version-556100 --format={{.State.Status}}
	I0717 02:04:44.438945   10936 fix.go:112] recreateIfNeeded on old-k8s-version-556100: state=Stopped err=<nil>
	W0717 02:04:44.438945   10936 fix.go:138] unexpected machine state, will restart: <nil>
	I0717 02:04:44.443956   10936 out.go:177] * Restarting existing docker container for "old-k8s-version-556100" ...
	I0717 02:04:44.457953   10936 cli_runner.go:164] Run: docker start old-k8s-version-556100
	I0717 02:04:47.402324   10936 cli_runner.go:217] Completed: docker start old-k8s-version-556100: (2.9443455s)
	I0717 02:04:47.418320   10936 cli_runner.go:164] Run: docker container inspect old-k8s-version-556100 --format={{.State.Status}}
	I0717 02:04:47.653204   10936 kic.go:430] container "old-k8s-version-556100" state is running.
	I0717 02:04:47.665193   10936 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-556100
	I0717 02:04:47.940627   10936 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-556100\config.json ...
	I0717 02:04:47.944591   10936 machine.go:94] provisionDockerMachine start ...
	I0717 02:04:47.959581   10936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556100
	I0717 02:04:48.216584   10936 main.go:141] libmachine: Using SSH client type: native
	I0717 02:04:48.217588   10936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x125a9e0] 0x125d5c0 <nil>  [] 0s} 127.0.0.1 51025 <nil> <nil>}
	I0717 02:04:48.217588   10936 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 02:04:48.220603   10936 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0717 02:04:51.434957   10936 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-556100
	
	I0717 02:04:51.434957   10936 ubuntu.go:169] provisioning hostname "old-k8s-version-556100"
	I0717 02:04:51.445974   10936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556100
	I0717 02:04:51.672928   10936 main.go:141] libmachine: Using SSH client type: native
	I0717 02:04:51.673925   10936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x125a9e0] 0x125d5c0 <nil>  [] 0s} 127.0.0.1 51025 <nil> <nil>}
	I0717 02:04:51.673925   10936 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-556100 && echo "old-k8s-version-556100" | sudo tee /etc/hostname
	I0717 02:04:51.943736   10936 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-556100
	
	I0717 02:04:51.958705   10936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556100
	I0717 02:04:52.232458   10936 main.go:141] libmachine: Using SSH client type: native
	I0717 02:04:52.233504   10936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x125a9e0] 0x125d5c0 <nil>  [] 0s} 127.0.0.1 51025 <nil> <nil>}
	I0717 02:04:52.233504   10936 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-556100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-556100/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-556100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 02:04:52.457973   10936 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 02:04:52.457973   10936 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0717 02:04:52.457973   10936 ubuntu.go:177] setting up certificates
	I0717 02:04:52.457973   10936 provision.go:84] configureAuth start
	I0717 02:04:52.475977   10936 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-556100
	I0717 02:04:52.743975   10936 provision.go:143] copyHostCerts
	I0717 02:04:52.743975   10936 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0717 02:04:52.743975   10936 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0717 02:04:52.744989   10936 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0717 02:04:52.746997   10936 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0717 02:04:52.746997   10936 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0717 02:04:52.747982   10936 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0717 02:04:52.749983   10936 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0717 02:04:52.749983   10936 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0717 02:04:52.750993   10936 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0717 02:04:52.751987   10936 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.old-k8s-version-556100 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-556100]
	I0717 02:04:52.869295   10936 provision.go:177] copyRemoteCerts
	I0717 02:04:52.893331   10936 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 02:04:52.910316   10936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556100
	I0717 02:04:53.141291   10936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51025 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\old-k8s-version-556100\id_rsa Username:docker}
	I0717 02:04:53.291305   10936 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 02:04:53.351293   10936 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1233 bytes)
	I0717 02:04:53.422300   10936 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 02:04:53.490325   10936 provision.go:87] duration metric: took 1.0323429s to configureAuth
	I0717 02:04:53.490325   10936 ubuntu.go:193] setting minikube options for container-runtime
	I0717 02:04:53.490325   10936 config.go:182] Loaded profile config "old-k8s-version-556100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0717 02:04:53.508306   10936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556100
	I0717 02:04:53.759962   10936 main.go:141] libmachine: Using SSH client type: native
	I0717 02:04:53.759962   10936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x125a9e0] 0x125d5c0 <nil>  [] 0s} 127.0.0.1 51025 <nil> <nil>}
	I0717 02:04:53.759962   10936 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 02:04:53.989601   10936 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0717 02:04:53.989601   10936 ubuntu.go:71] root file system type: overlay
	I0717 02:04:53.989902   10936 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 02:04:54.008482   10936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556100
	I0717 02:04:54.228645   10936 main.go:141] libmachine: Using SSH client type: native
	I0717 02:04:54.228645   10936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x125a9e0] 0x125d5c0 <nil>  [] 0s} 127.0.0.1 51025 <nil> <nil>}
	I0717 02:04:54.228645   10936 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 02:04:54.554855   10936 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 02:04:54.567836   10936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556100
	I0717 02:04:54.782706   10936 main.go:141] libmachine: Using SSH client type: native
	I0717 02:04:54.782706   10936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x125a9e0] 0x125d5c0 <nil>  [] 0s} 127.0.0.1 51025 <nil> <nil>}
	I0717 02:04:54.782706   10936 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 02:04:55.006134   10936 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 02:04:55.006134   10936 machine.go:97] duration metric: took 7.0614825s to provisionDockerMachine
	I0717 02:04:55.006134   10936 start.go:293] postStartSetup for "old-k8s-version-556100" (driver="docker")
	I0717 02:04:55.006134   10936 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 02:04:55.029140   10936 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 02:04:55.042146   10936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556100
	I0717 02:04:55.251662   10936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51025 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\old-k8s-version-556100\id_rsa Username:docker}
	I0717 02:04:55.416670   10936 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 02:04:55.429164   10936 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 02:04:55.429164   10936 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 02:04:55.429164   10936 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 02:04:55.429164   10936 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0717 02:04:55.429164   10936 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0717 02:04:55.430134   10936 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0717 02:04:55.431142   10936 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77122.pem -> 77122.pem in /etc/ssl/certs
	I0717 02:04:55.453162   10936 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 02:04:55.478414   10936 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77122.pem --> /etc/ssl/certs/77122.pem (1708 bytes)
	I0717 02:04:55.532400   10936 start.go:296] duration metric: took 526.2616ms for postStartSetup
	I0717 02:04:55.548405   10936 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 02:04:55.562410   10936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556100
	I0717 02:04:55.777203   10936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51025 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\old-k8s-version-556100\id_rsa Username:docker}
	I0717 02:04:55.952814   10936 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 02:04:55.964882   10936 fix.go:56] duration metric: took 11.7648186s for fixHost
	I0717 02:04:55.964882   10936 start.go:83] releasing machines lock for "old-k8s-version-556100", held for 11.7658284s
	I0717 02:04:55.974889   10936 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-556100
	I0717 02:04:56.218333   10936 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0717 02:04:56.233329   10936 ssh_runner.go:195] Run: cat /version.json
	I0717 02:04:56.234325   10936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556100
	I0717 02:04:56.246339   10936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556100
	I0717 02:04:56.448027   10936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51025 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\old-k8s-version-556100\id_rsa Username:docker}
	I0717 02:04:56.473501   10936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51025 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\old-k8s-version-556100\id_rsa Username:docker}
	W0717 02:04:56.588444   10936 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0717 02:04:56.626179   10936 ssh_runner.go:195] Run: systemctl --version
	I0717 02:04:56.648790   10936 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 02:04:56.682317   10936 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W0717 02:04:56.691320   10936 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W0717 02:04:56.691320   10936 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	W0717 02:04:56.705320   10936 start.go:439] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I0717 02:04:56.718323   10936 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0717 02:04:56.765335   10936 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0717 02:04:56.806710   10936 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 02:04:56.806710   10936 start.go:495] detecting cgroup driver to use...
	I0717 02:04:56.806831   10936 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0717 02:04:56.807111   10936 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 02:04:56.854243   10936 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0717 02:04:56.897469   10936 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 02:04:56.918470   10936 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 02:04:56.931481   10936 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 02:04:56.975807   10936 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 02:04:57.103299   10936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 02:04:57.148844   10936 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 02:04:57.224046   10936 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 02:04:57.261956   10936 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 02:04:57.306693   10936 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 02:04:57.347925   10936 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 02:04:57.388973   10936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 02:04:57.564698   10936 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 02:04:57.776482   10936 start.go:495] detecting cgroup driver to use...
	I0717 02:04:57.777871   10936 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0717 02:04:57.797644   10936 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 02:04:57.830624   10936 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0717 02:04:57.847773   10936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 02:04:57.875042   10936 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 02:04:57.934769   10936 ssh_runner.go:195] Run: which cri-dockerd
	I0717 02:04:57.966883   10936 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 02:04:57.990107   10936 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 02:04:58.059784   10936 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 02:04:58.253150   10936 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 02:04:58.426138   10936 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 02:04:58.426358   10936 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 02:04:58.483286   10936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 02:04:58.697975   10936 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 02:05:01.234763   10936 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5366817s)
	I0717 02:05:01.246788   10936 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 02:05:01.323154   10936 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 02:05:01.391196   10936 out.go:204] * Preparing Kubernetes v1.20.0 on Docker 27.0.3 ...
	I0717 02:05:01.404010   10936 cli_runner.go:164] Run: docker exec -t old-k8s-version-556100 dig +short host.docker.internal
	I0717 02:05:01.736616   10936 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0717 02:05:01.749617   10936 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0717 02:05:01.764076   10936 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 02:05:01.816640   10936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-556100
	I0717 02:05:02.034041   10936 kubeadm.go:883] updating cluster {Name:old-k8s-version-556100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0717 02:05:02.035033   10936 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0717 02:05:02.045058   10936 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 02:05:02.098029   10936 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0717 02:05:02.098029   10936 docker.go:691] registry.k8s.io/kube-apiserver:v1.20.0 wasn't preloaded
	I0717 02:05:02.122509   10936 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0717 02:05:02.162582   10936 ssh_runner.go:195] Run: which lz4
	I0717 02:05:02.205355   10936 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 02:05:02.221252   10936 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 02:05:02.221252   10936 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (401930599 bytes)
	I0717 02:05:17.746123   10936 docker.go:649] duration metric: took 15.5675485s to copy over tarball
	I0717 02:05:17.766513   10936 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 02:05:25.149563   10936 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (7.3810041s)
	I0717 02:05:25.149563   10936 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 02:05:25.261968   10936 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0717 02:05:25.289479   10936 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2824 bytes)
	I0717 02:05:25.346944   10936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 02:05:25.526040   10936 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 02:05:39.829539   10936 ssh_runner.go:235] Completed: sudo systemctl restart docker: (14.303376s)
	I0717 02:05:39.839531   10936 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 02:05:39.885168   10936 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0717 02:05:39.885168   10936 docker.go:691] registry.k8s.io/kube-apiserver:v1.20.0 wasn't preloaded
	I0717 02:05:39.885168   10936 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 02:05:39.900413   10936 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 02:05:39.907155   10936 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 02:05:39.913163   10936 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 02:05:39.916165   10936 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 02:05:39.921180   10936 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 02:05:39.921180   10936 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 02:05:39.928169   10936 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 02:05:39.933151   10936 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 02:05:39.942165   10936 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0717 02:05:39.942165   10936 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 02:05:39.951179   10936 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 02:05:39.952158   10936 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 02:05:39.956164   10936 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0717 02:05:39.958184   10936 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0717 02:05:39.969181   10936 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 02:05:39.976170   10936 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	W0717 02:05:40.056001   10936 image.go:187] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0717 02:05:40.168615   10936 image.go:187] authn lookup for registry.k8s.io/kube-proxy:v1.20.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0717 02:05:40.278721   10936 image.go:187] authn lookup for registry.k8s.io/kube-apiserver:v1.20.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0717 02:05:40.389721   10936 image.go:187] authn lookup for registry.k8s.io/kube-controller-manager:v1.20.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0717 02:05:40.429723   10936 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	W0717 02:05:40.514274   10936 image.go:187] authn lookup for registry.k8s.io/kube-scheduler:v1.20.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0717 02:05:40.639588   10936 image.go:187] authn lookup for registry.k8s.io/etcd:3.4.13-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0717 02:05:40.687578   10936 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0717 02:05:40.721576   10936 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 02:05:40.731582   10936 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0717 02:05:40.759560   10936 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0717 02:05:40.772568   10936 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0717 02:05:40.772568   10936 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.20.0 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.20.0
	I0717 02:05:40.772568   10936 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0717 02:05:40.779570   10936 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0717 02:05:40.779570   10936 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.20.0 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.20.0
	I0717 02:05:40.779570   10936 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	W0717 02:05:40.781560   10936 image.go:187] authn lookup for registry.k8s.io/pause:3.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0717 02:05:40.789564   10936 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0717 02:05:40.789564   10936 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.20.0 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.20.0
	I0717 02:05:40.789564   10936 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0717 02:05:40.790579   10936 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.20.0
	I0717 02:05:40.796576   10936 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0717 02:05:40.807575   10936 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0717 02:05:40.834577   10936 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0717 02:05:40.834577   10936 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.20.0 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.20.0
	I0717 02:05:40.834577   10936 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0717 02:05:40.848590   10936 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0717 02:05:40.885572   10936 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.20.0
	I0717 02:05:40.893585   10936 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.20.0
	I0717 02:05:40.903603   10936 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.20.0
	I0717 02:05:40.913561   10936 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0717 02:05:40.922565   10936 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.20.0
	W0717 02:05:40.926570   10936 image.go:187] authn lookup for registry.k8s.io/coredns:1.7.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0717 02:05:40.993575   10936 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0717 02:05:40.993575   10936 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.4.13-0 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.13-0
	I0717 02:05:40.993575   10936 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0717 02:05:41.002577   10936 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 02:05:41.008580   10936 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.13-0
	I0717 02:05:41.055449   10936 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 02:05:41.055449   10936 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.2 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.2
	I0717 02:05:41.055449   10936 docker.go:337] Removing image: registry.k8s.io/pause:3.2
	I0717 02:05:41.058442   10936 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.13-0
	I0717 02:05:41.074425   10936 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0717 02:05:41.146969   10936 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.2
	I0717 02:05:41.200942   10936 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0717 02:05:41.279941   10936 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0717 02:05:41.279941   10936 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.7.0 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.7.0
	I0717 02:05:41.279941   10936 docker.go:337] Removing image: registry.k8s.io/coredns:1.7.0
	I0717 02:05:41.298956   10936 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.7.0
	I0717 02:05:41.362943   10936 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.7.0
	I0717 02:05:41.362943   10936 cache_images.go:92] duration metric: took 1.4777622s to LoadCachedImages
	W0717 02:05:41.362943   10936 out.go:239] X Unable to load cached images: LoadCachedImages: CreateFile C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.20.0: The system cannot find the file specified.
	I0717 02:05:41.362943   10936 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 docker true true} ...
	I0717 02:05:41.362943   10936 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-556100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0717 02:05:41.381990   10936 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 02:05:41.555945   10936 cni.go:84] Creating CNI manager for ""
	I0717 02:05:41.555945   10936 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0717 02:05:41.556950   10936 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0717 02:05:41.556950   10936 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-556100 NodeName:old-k8s-version-556100 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 02:05:41.556950   10936 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-556100"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 02:05:41.581958   10936 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0717 02:05:41.617967   10936 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 02:05:41.638978   10936 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 02:05:41.668956   10936 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0717 02:05:41.725970   10936 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 02:05:41.782953   10936 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2118 bytes)
	I0717 02:05:41.853962   10936 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0717 02:05:41.869945   10936 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 02:05:41.935955   10936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 02:05:42.164946   10936 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 02:05:42.211959   10936 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-556100 for IP: 192.168.76.2
	I0717 02:05:42.212951   10936 certs.go:194] generating shared ca certs ...
	I0717 02:05:42.212951   10936 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 02:05:42.214010   10936 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0717 02:05:42.214010   10936 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0717 02:05:42.214010   10936 certs.go:256] generating profile certs ...
	I0717 02:05:42.214969   10936 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-556100\client.key
	I0717 02:05:42.215949   10936 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-556100\apiserver.key.602b9f31
	I0717 02:05:42.215949   10936 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-556100\proxy-client.key
	I0717 02:05:42.218975   10936 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7712.pem (1338 bytes)
	W0717 02:05:42.218975   10936 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7712_empty.pem, impossibly tiny 0 bytes
	I0717 02:05:42.218975   10936 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0717 02:05:42.218975   10936 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0717 02:05:42.219961   10936 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0717 02:05:42.219961   10936 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0717 02:05:42.220958   10936 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77122.pem (1708 bytes)
	I0717 02:05:42.222950   10936 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 02:05:42.295853   10936 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 02:05:42.374871   10936 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 02:05:42.462871   10936 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0717 02:05:42.539874   10936 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-556100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0717 02:05:42.663868   10936 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-556100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 02:05:42.748846   10936 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-556100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 02:05:42.835859   10936 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-556100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 02:05:42.944852   10936 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7712.pem --> /usr/share/ca-certificates/7712.pem (1338 bytes)
	I0717 02:05:43.075833   10936 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77122.pem --> /usr/share/ca-certificates/77122.pem (1708 bytes)
	I0717 02:05:43.147841   10936 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 02:05:43.212842   10936 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 02:05:43.309540   10936 ssh_runner.go:195] Run: openssl version
	I0717 02:05:43.394531   10936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 02:05:43.509175   10936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 02:05:43.562144   10936 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 02:05:43.587172   10936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 02:05:43.688151   10936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 02:05:43.746184   10936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7712.pem && ln -fs /usr/share/ca-certificates/7712.pem /etc/ssl/certs/7712.pem"
	I0717 02:05:43.824551   10936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7712.pem
	I0717 02:05:43.836571   10936 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:42 /usr/share/ca-certificates/7712.pem
	I0717 02:05:43.860553   10936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7712.pem
	I0717 02:05:43.907580   10936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7712.pem /etc/ssl/certs/51391683.0"
	I0717 02:05:43.964784   10936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77122.pem && ln -fs /usr/share/ca-certificates/77122.pem /etc/ssl/certs/77122.pem"
	I0717 02:05:44.022191   10936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77122.pem
	I0717 02:05:44.034191   10936 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:42 /usr/share/ca-certificates/77122.pem
	I0717 02:05:44.049199   10936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77122.pem
	I0717 02:05:44.086833   10936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77122.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 02:05:44.142805   10936 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 02:05:44.180981   10936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 02:05:44.227977   10936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 02:05:44.260968   10936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 02:05:44.323971   10936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 02:05:44.393026   10936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 02:05:44.435958   10936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0717 02:05:44.481977   10936 kubeadm.go:392] StartCluster: {Name:old-k8s-version-556100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-556100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 02:05:44.498972   10936 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 02:05:44.692970   10936 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 02:05:44.779990   10936 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0717 02:05:44.780978   10936 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0717 02:05:44.807019   10936 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 02:05:44.843020   10936 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 02:05:44.862977   10936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-556100
	I0717 02:05:45.198002   10936 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-556100" does not appear in C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0717 02:05:45.199974   10936 kubeconfig.go:62] C:\Users\jenkins.minikube3\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-556100" cluster setting kubeconfig missing "old-k8s-version-556100" context setting]
	I0717 02:05:45.201958   10936 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 02:05:45.253964   10936 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 02:05:45.284987   10936 kubeadm.go:630] The running cluster does not require reconfiguration: 127.0.0.1
	I0717 02:05:45.284987   10936 kubeadm.go:597] duration metric: took 504.0046ms to restartPrimaryControlPlane
	I0717 02:05:45.284987   10936 kubeadm.go:394] duration metric: took 804.0232ms to StartCluster
	I0717 02:05:45.284987   10936 settings.go:142] acquiring lock: {Name:mke99fb8c09012609ce6804e7dfd4d68f5541df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 02:05:45.284987   10936 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0717 02:05:45.288965   10936 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 02:05:45.290971   10936 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 02:05:45.290971   10936 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0717 02:05:45.290971   10936 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-556100"
	I0717 02:05:45.290971   10936 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-556100"
	I0717 02:05:45.291967   10936 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-556100"
	I0717 02:05:45.291967   10936 config.go:182] Loaded profile config "old-k8s-version-556100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0717 02:05:45.290971   10936 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-556100"
	W0717 02:05:45.291967   10936 addons.go:243] addon storage-provisioner should already be in state true
	I0717 02:05:45.291967   10936 host.go:66] Checking if "old-k8s-version-556100" exists ...
	I0717 02:05:45.290971   10936 addons.go:69] Setting dashboard=true in profile "old-k8s-version-556100"
	I0717 02:05:45.291967   10936 addons.go:234] Setting addon dashboard=true in "old-k8s-version-556100"
	W0717 02:05:45.291967   10936 addons.go:243] addon dashboard should already be in state true
	I0717 02:05:45.290971   10936 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-556100"
	I0717 02:05:45.293008   10936 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-556100"
	W0717 02:05:45.293008   10936 addons.go:243] addon metrics-server should already be in state true
	I0717 02:05:45.293008   10936 host.go:66] Checking if "old-k8s-version-556100" exists ...
	I0717 02:05:45.293008   10936 host.go:66] Checking if "old-k8s-version-556100" exists ...
	I0717 02:05:45.305306   10936 out.go:177] * Verifying Kubernetes components...
	I0717 02:05:45.336993   10936 cli_runner.go:164] Run: docker container inspect old-k8s-version-556100 --format={{.State.Status}}
	I0717 02:05:45.341034   10936 cli_runner.go:164] Run: docker container inspect old-k8s-version-556100 --format={{.State.Status}}
	I0717 02:05:45.343002   10936 cli_runner.go:164] Run: docker container inspect old-k8s-version-556100 --format={{.State.Status}}
	I0717 02:05:45.346045   10936 cli_runner.go:164] Run: docker container inspect old-k8s-version-556100 --format={{.State.Status}}
	I0717 02:05:45.370994   10936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 02:05:45.669915   10936 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 02:05:45.672896   10936 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-556100"
	W0717 02:05:45.672896   10936 addons.go:243] addon default-storageclass should already be in state true
	I0717 02:05:45.672896   10936 host.go:66] Checking if "old-k8s-version-556100" exists ...
	I0717 02:05:45.692915   10936 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 02:05:45.692915   10936 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 02:05:45.748886   10936 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 02:05:45.709891   10936 cli_runner.go:164] Run: docker container inspect old-k8s-version-556100 --format={{.State.Status}}
	I0717 02:05:45.709891   10936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556100
	I0717 02:05:45.721887   10936 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0717 02:05:45.806905   10936 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0717 02:05:45.826887   10936 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 02:05:45.826887   10936 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 02:05:45.848895   10936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556100
	I0717 02:05:45.878892   10936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-556100
	I0717 02:05:45.899891   10936 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0717 02:05:45.913893   10936 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0717 02:05:45.914885   10936 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0717 02:05:45.942264   10936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556100
	I0717 02:05:46.155060   10936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51025 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\old-k8s-version-556100\id_rsa Username:docker}
	I0717 02:05:46.176359   10936 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 02:05:46.176359   10936 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 02:05:46.176359   10936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51025 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\old-k8s-version-556100\id_rsa Username:docker}
	I0717 02:05:46.187353   10936 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-556100" to be "Ready" ...
	I0717 02:05:46.194350   10936 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-556100
	I0717 02:05:46.282353   10936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51025 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\old-k8s-version-556100\id_rsa Username:docker}
	I0717 02:05:46.391359   10936 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 02:05:46.391359   10936 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 02:05:46.395353   10936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 02:05:46.422333   10936 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51025 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\old-k8s-version-556100\id_rsa Username:docker}
	I0717 02:05:46.442348   10936 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0717 02:05:46.442348   10936 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0717 02:05:46.464404   10936 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 02:05:46.464404   10936 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 02:05:46.513006   10936 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0717 02:05:46.513006   10936 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0717 02:05:46.562996   10936 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 02:05:46.562996   10936 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 02:05:46.675783   10936 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0717 02:05:46.675783   10936 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W0717 02:05:46.692769   10936 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 02:05:46.693786   10936 retry.go:31] will retry after 216.693652ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 02:05:46.697768   10936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 02:05:46.703770   10936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 02:05:46.768257   10936 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0717 02:05:46.768257   10936 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0717 02:05:46.888342   10936 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0717 02:05:46.888342   10936 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0717 02:05:46.938319   10936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 02:05:47.082983   10936 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0717 02:05:47.083978   10936 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0717 02:05:47.196544   10936 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0717 02:05:47.196544   10936 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W0717 02:05:47.205603   10936 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 02:05:47.205603   10936 retry.go:31] will retry after 339.996716ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0717 02:05:47.205603   10936 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 02:05:47.205603   10936 retry.go:31] will retry after 334.009973ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 02:05:47.304552   10936 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0717 02:05:47.304552   10936 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0717 02:05:47.336556   10936 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 02:05:47.336556   10936 retry.go:31] will retry after 296.329774ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 02:05:47.366532   10936 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0717 02:05:47.366532   10936 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0717 02:05:47.449539   10936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0717 02:05:47.576541   10936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 02:05:47.578547   10936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0717 02:05:47.651547   10936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0717 02:05:47.671547   10936 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 02:05:47.671547   10936 retry.go:31] will retry after 289.029916ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0717 02:05:47.882575   10936 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0717 02:05:47.882575   10936 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 02:05:47.882575   10936 retry.go:31] will retry after 508.04567ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 02:05:47.882575   10936 retry.go:31] will retry after 477.897424ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0717 02:05:47.927548   10936 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 02:05:47.927548   10936 retry.go:31] will retry after 605.901395ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 02:05:47.990575   10936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0717 02:05:48.135953   10936 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 02:05:48.135953   10936 retry.go:31] will retry after 260.317058ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 02:05:48.397230   10936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 02:05:48.415227   10936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0717 02:05:48.420270   10936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0717 02:05:48.564250   10936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0717 02:05:48.674372   10936 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 02:05:48.674372   10936 retry.go:31] will retry after 481.626312ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0717 02:05:48.688360   10936 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 02:05:48.688360   10936 retry.go:31] will retry after 770.328731ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0717 02:05:48.688360   10936 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 02:05:48.688360   10936 retry.go:31] will retry after 692.731494ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0717 02:05:48.797552   10936 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 02:05:48.797552   10936 retry.go:31] will retry after 562.931625ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 02:05:49.177055   10936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0717 02:05:49.359525   10936 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 02:05:49.359525   10936 retry.go:31] will retry after 1.038988453s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 02:05:49.378514   10936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 02:05:49.405534   10936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0717 02:05:49.496009   10936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0717 02:05:49.594015   10936 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 02:05:49.594015   10936 retry.go:31] will retry after 947.943543ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0717 02:05:49.768078   10936 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 02:05:49.768078   10936 retry.go:31] will retry after 995.925002ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0717 02:05:49.889137   10936 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 02:05:49.889137   10936 retry.go:31] will retry after 498.765815ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 02:05:50.423886   10936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0717 02:05:50.423886   10936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 02:05:50.578222   10936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 02:05:50.796123   10936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0717 02:05:50.893885   10936 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 02:05:50.893885   10936 retry.go:31] will retry after 1.879397618s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0717 02:05:50.893885   10936 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 02:05:50.893885   10936 retry.go:31] will retry after 861.433408ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0717 02:05:51.074119   10936 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 02:05:51.074119   10936 retry.go:31] will retry after 1.17304461s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0717 02:05:51.194582   10936 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 02:05:51.194582   10936 retry.go:31] will retry after 1.173980755s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 02:05:51.785899   10936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0717 02:05:52.271983   10936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 02:05:52.399882   10936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0717 02:05:52.806859   10936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 02:06:02.489049   10936 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.702967s)
	W0717 02:06:02.489140   10936 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I0717 02:06:02.489260   10936 retry.go:31] will retry after 1.613060477s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I0717 02:06:03.566369   10936 node_ready.go:49] node "old-k8s-version-556100" has status "Ready":"True"
	I0717 02:06:03.566369   10936 node_ready.go:38] duration metric: took 17.3788659s for node "old-k8s-version-556100" to be "Ready" ...
	I0717 02:06:03.566369   10936 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 02:06:04.121989   10936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0717 02:06:05.624561   10936 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-dmjwg" in "kube-system" namespace to be "Ready" ...
	I0717 02:06:06.867069   10936 pod_ready.go:92] pod "coredns-74ff55c5b-dmjwg" in "kube-system" namespace has status "Ready":"True"
	I0717 02:06:06.867069   10936 pod_ready.go:81] duration metric: took 1.2424968s for pod "coredns-74ff55c5b-dmjwg" in "kube-system" namespace to be "Ready" ...
	I0717 02:06:06.867069   10936 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-556100" in "kube-system" namespace to be "Ready" ...
	I0717 02:06:09.384441   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:06:10.428371   10936 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (18.0283324s)
	I0717 02:06:10.428371   10936 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (18.1562311s)
	I0717 02:06:12.764396   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:06:12.778962   10936 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (19.9719301s)
	I0717 02:06:12.778962   10936 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-556100"
	I0717 02:06:15.012638   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:06:15.703109   10936 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (11.581019s)
	I0717 02:06:15.711098   10936 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-556100 addons enable metrics-server
	
	I0717 02:06:15.720125   10936 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0717 02:06:15.735580   10936 addons.go:510] duration metric: took 30.4443458s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0717 02:06:17.487647   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:06:19.896532   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:06:21.991440   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:06:24.382748   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:06:26.383230   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:06:28.396671   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:06:30.398638   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:06:32.888893   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:06:34.893177   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:06:37.388948   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:06:39.897187   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:06:42.391900   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:06:44.398139   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:06:46.958782   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:06:49.650949   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:06:51.933972   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:06:54.385890   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:06:56.396427   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:06:58.893614   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:07:00.900048   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:07:03.396608   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:07:05.897847   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:07:08.396459   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:07:11.165571   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:07:13.383277   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:07:15.389340   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:07:17.888550   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:07:19.899444   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:07:22.388569   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:07:24.393611   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:07:26.396591   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:07:28.889026   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:07:30.899570   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:07:33.396276   10936 pod_ready.go:102] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:07:33.898569   10936 pod_ready.go:92] pod "etcd-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"True"
	I0717 02:07:33.898569   10936 pod_ready.go:81] duration metric: took 1m27.0307425s for pod "etcd-old-k8s-version-556100" in "kube-system" namespace to be "Ready" ...
	I0717 02:07:33.898569   10936 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-556100" in "kube-system" namespace to be "Ready" ...
	I0717 02:07:33.911651   10936 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"True"
	I0717 02:07:33.911651   10936 pod_ready.go:81] duration metric: took 13.0821ms for pod "kube-apiserver-old-k8s-version-556100" in "kube-system" namespace to be "Ready" ...
	I0717 02:07:33.911651   10936 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-556100" in "kube-system" namespace to be "Ready" ...
	I0717 02:07:35.947375   10936 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:07:38.437957   10936 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:07:40.937566   10936 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"False"
	I0717 02:07:41.935661   10936 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"True"
	I0717 02:07:41.935661   10936 pod_ready.go:81] duration metric: took 8.0239399s for pod "kube-controller-manager-old-k8s-version-556100" in "kube-system" namespace to be "Ready" ...
	I0717 02:07:41.935661   10936 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bjvpb" in "kube-system" namespace to be "Ready" ...
	I0717 02:07:41.952293   10936 pod_ready.go:92] pod "kube-proxy-bjvpb" in "kube-system" namespace has status "Ready":"True"
	I0717 02:07:41.952293   10936 pod_ready.go:81] duration metric: took 16.6323ms for pod "kube-proxy-bjvpb" in "kube-system" namespace to be "Ready" ...
	I0717 02:07:41.952293   10936 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-556100" in "kube-system" namespace to be "Ready" ...
	I0717 02:07:41.964939   10936 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-556100" in "kube-system" namespace has status "Ready":"True"
	I0717 02:07:41.964939   10936 pod_ready.go:81] duration metric: took 12.6454ms for pod "kube-scheduler-old-k8s-version-556100" in "kube-system" namespace to be "Ready" ...
	I0717 02:07:41.964939   10936 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace to be "Ready" ...
	I0717 02:07:43.981841   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:07:45.986062   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:07:48.485820   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:07:50.489974   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:07:52.497413   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:07:54.983659   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:07:57.481535   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:07:59.497905   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:08:01.991174   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:08:04.490991   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:08:06.495492   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:08:08.982803   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:08:10.983658   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:08:13.495140   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:08:15.985865   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:08:18.484352   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:08:20.490612   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:08:22.981676   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:08:24.990442   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:08:27.486121   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:08:29.500592   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:08:31.996895   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:08:34.024038   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:08:36.486122   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:08:38.489488   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:08:40.491104   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:08:42.496865   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:08:44.983353   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:08:46.985713   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:08:48.990708   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:08:51.502352   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:08:53.988515   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:08:56.490749   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:08:58.982407   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:09:01.492996   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:09:03.995242   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:09:06.505854   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:09:08.981614   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:09:10.988967   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:09:13.495170   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:09:15.498046   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:09:17.499054   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:09:19.995962   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:09:22.492656   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:09:24.526668   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:09:26.996747   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:09:29.526008   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:09:31.994441   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:09:34.488669   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:09:36.502602   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:09:38.987546   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:09:40.994583   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:09:43.001019   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:09:45.504578   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:09:47.987136   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:09:49.994660   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:09:51.999599   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:09:54.615563   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:09:57.012801   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:09:59.493471   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:10:01.508348   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:10:04.521205   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:10:06.993186   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:10:08.995401   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:10:11.002970   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:10:13.487557   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:10:15.495623   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:10:17.508666   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:10:19.997904   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:10:22.000004   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:10:24.003610   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:10:31.865136   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:10:34.122182   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:10:36.133052   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:10:38.158366   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:10:40.490840   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:10:43.087239   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:10:45.499203   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:10:47.503888   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:10:50.001126   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:10:52.503844   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:10:55.055585   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:10:57.497162   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:10:59.506743   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:11:01.507042   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:11:04.011415   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:11:06.134554   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:11:08.510201   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:11:10.681828   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:11:13.000627   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:11:15.001757   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:11:17.114717   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:11:19.849867   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:11:22.007104   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:11:24.489428   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:11:26.898008   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:11:29.292866   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:11:31.503703   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:11:33.991793   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:11:36.002429   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:11:38.011901   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:11:40.491992   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:11:41.974141   10936 pod_ready.go:81] duration metric: took 4m0.0070389s for pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace to be "Ready" ...
	E0717 02:11:41.974141   10936 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 02:11:41.974141   10936 pod_ready.go:38] duration metric: took 5m38.4048158s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 02:11:41.974303   10936 api_server.go:52] waiting for apiserver process to appear ...
	I0717 02:11:41.985881   10936 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 02:11:42.065477   10936 logs.go:276] 2 containers: [f1fe02b4a78a e030f8ebfbdb]
	I0717 02:11:42.078231   10936 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 02:11:42.151268   10936 logs.go:276] 2 containers: [0c11f86257e5 ba8f75033d8c]
	I0717 02:11:42.172374   10936 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 02:11:42.247650   10936 logs.go:276] 2 containers: [a36c9df36a2e 58914faf6333]
	I0717 02:11:42.261655   10936 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 02:11:42.349030   10936 logs.go:276] 2 containers: [b5b115665dd7 66ed77ac46f7]
	I0717 02:11:42.365722   10936 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 02:11:42.465419   10936 logs.go:276] 2 containers: [f60d408f22bf 0d09bed0c3c5]
	I0717 02:11:42.483868   10936 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 02:11:42.555634   10936 logs.go:276] 2 containers: [17ff785b07a1 4771195745ef]
	I0717 02:11:42.570151   10936 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 02:11:42.651604   10936 logs.go:276] 0 containers: []
	W0717 02:11:42.651664   10936 logs.go:278] No container was found matching "kindnet"
	I0717 02:11:42.666894   10936 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 02:11:42.755379   10936 logs.go:276] 2 containers: [d21e9adcbaa5 ef8db1d0f6c0]
	I0717 02:11:42.768684   10936 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 02:11:42.825255   10936 logs.go:276] 1 containers: [d2895cf887fb]
	I0717 02:11:42.825255   10936 logs.go:123] Gathering logs for etcd [0c11f86257e5] ...
	I0717 02:11:42.825255   10936 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c11f86257e5"
	I0717 02:11:42.921224   10936 logs.go:123] Gathering logs for kube-proxy [0d09bed0c3c5] ...
	I0717 02:11:42.921224   10936 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d09bed0c3c5"
	I0717 02:11:42.990780   10936 logs.go:123] Gathering logs for kubernetes-dashboard [d2895cf887fb] ...
	I0717 02:11:42.991010   10936 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2895cf887fb"
	I0717 02:11:43.056038   10936 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:11:43.056038   10936 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 02:11:43.692230   10936 logs.go:123] Gathering logs for kube-apiserver [f1fe02b4a78a] ...
	I0717 02:11:43.692322   10936 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1fe02b4a78a"
	I0717 02:11:43.830115   10936 logs.go:123] Gathering logs for etcd [ba8f75033d8c] ...
	I0717 02:11:43.830115   10936 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8f75033d8c"
	I0717 02:11:43.959253   10936 logs.go:123] Gathering logs for coredns [58914faf6333] ...
	I0717 02:11:43.959253   10936 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58914faf6333"
	I0717 02:11:44.047944   10936 logs.go:123] Gathering logs for storage-provisioner [ef8db1d0f6c0] ...
	I0717 02:11:44.047944   10936 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8db1d0f6c0"
	I0717 02:11:44.111462   10936 logs.go:123] Gathering logs for kubelet ...
	I0717 02:11:44.111542   10936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 02:11:44.233338   10936 logs.go:138] Found kubelet problem: Jul 17 02:06:16 old-k8s-version-556100 kubelet[1879]: E0717 02:06:16.774645    1879 pod_workers.go:191] Error syncing pod a3e7be694ef7cf952503c5d331abc0ac ("kube-controller-manager-old-k8s-version-556100_kube-system(a3e7be694ef7cf952503c5d331abc0ac)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-old-k8s-version-556100_kube-system(a3e7be694ef7cf952503c5d331abc0ac)"
	W0717 02:11:44.241766   10936 logs.go:138] Found kubelet problem: Jul 17 02:06:17 old-k8s-version-556100 kubelet[1879]: E0717 02:06:17.382403    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0717 02:11:44.243396   10936 logs.go:138] Found kubelet problem: Jul 17 02:06:18 old-k8s-version-556100 kubelet[1879]: E0717 02:06:18.474647    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.243833   10936 logs.go:138] Found kubelet problem: Jul 17 02:06:19 old-k8s-version-556100 kubelet[1879]: E0717 02:06:19.063870    1879 pod_workers.go:191] Error syncing pod a3e7be694ef7cf952503c5d331abc0ac ("kube-controller-manager-old-k8s-version-556100_kube-system(a3e7be694ef7cf952503c5d331abc0ac)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-old-k8s-version-556100_kube-system(a3e7be694ef7cf952503c5d331abc0ac)"
	W0717 02:11:44.243833   10936 logs.go:138] Found kubelet problem: Jul 17 02:06:19 old-k8s-version-556100 kubelet[1879]: E0717 02:06:19.600033    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.248869   10936 logs.go:138] Found kubelet problem: Jul 17 02:06:30 old-k8s-version-556100 kubelet[1879]: E0717 02:06:30.559826    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0717 02:11:44.250203   10936 logs.go:138] Found kubelet problem: Jul 17 02:06:39 old-k8s-version-556100 kubelet[1879]: E0717 02:06:39.023865    1879 pod_workers.go:191] Error syncing pod a5a2df01-7a16-4e85-a81e-2c4dafdf61cc ("storage-provisioner_kube-system(a5a2df01-7a16-4e85-a81e-2c4dafdf61cc)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a5a2df01-7a16-4e85-a81e-2c4dafdf61cc)"
	W0717 02:11:44.250799   10936 logs.go:138] Found kubelet problem: Jul 17 02:06:45 old-k8s-version-556100 kubelet[1879]: E0717 02:06:45.502107    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.256037   10936 logs.go:138] Found kubelet problem: Jul 17 02:06:51 old-k8s-version-556100 kubelet[1879]: E0717 02:06:51.199878    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0717 02:11:44.259165   10936 logs.go:138] Found kubelet problem: Jul 17 02:06:51 old-k8s-version-556100 kubelet[1879]: E0717 02:06:51.863868    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.259659   10936 logs.go:138] Found kubelet problem: Jul 17 02:06:52 old-k8s-version-556100 kubelet[1879]: E0717 02:06:52.910168    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.261340   10936 logs.go:138] Found kubelet problem: Jul 17 02:07:11 old-k8s-version-556100 kubelet[1879]: E0717 02:07:11.144160    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0717 02:11:44.267451   10936 logs.go:138] Found kubelet problem: Jul 17 02:07:12 old-k8s-version-556100 kubelet[1879]: E0717 02:07:12.237780    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0717 02:11:44.268066   10936 logs.go:138] Found kubelet problem: Jul 17 02:07:24 old-k8s-version-556100 kubelet[1879]: E0717 02:07:24.496177    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.268917   10936 logs.go:138] Found kubelet problem: Jul 17 02:07:25 old-k8s-version-556100 kubelet[1879]: E0717 02:07:25.494854    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.269338   10936 logs.go:138] Found kubelet problem: Jul 17 02:07:36 old-k8s-version-556100 kubelet[1879]: E0717 02:07:36.494911    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.273224   10936 logs.go:138] Found kubelet problem: Jul 17 02:07:39 old-k8s-version-556100 kubelet[1879]: E0717 02:07:39.015113    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0717 02:11:44.274372   10936 logs.go:138] Found kubelet problem: Jul 17 02:07:49 old-k8s-version-556100 kubelet[1879]: E0717 02:07:49.492826    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.274372   10936 logs.go:138] Found kubelet problem: Jul 17 02:07:51 old-k8s-version-556100 kubelet[1879]: E0717 02:07:51.507447    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.275392   10936 logs.go:138] Found kubelet problem: Jul 17 02:08:02 old-k8s-version-556100 kubelet[1879]: E0717 02:08:02.492712    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.280714   10936 logs.go:138] Found kubelet problem: Jul 17 02:08:03 old-k8s-version-556100 kubelet[1879]: E0717 02:08:03.543474    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0717 02:11:44.281044   10936 logs.go:138] Found kubelet problem: Jul 17 02:08:14 old-k8s-version-556100 kubelet[1879]: E0717 02:08:14.495060    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.281044   10936 logs.go:138] Found kubelet problem: Jul 17 02:08:17 old-k8s-version-556100 kubelet[1879]: E0717 02:08:17.494318    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.281044   10936 logs.go:138] Found kubelet problem: Jul 17 02:08:28 old-k8s-version-556100 kubelet[1879]: E0717 02:08:28.489246    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.286124   10936 logs.go:138] Found kubelet problem: Jul 17 02:08:30 old-k8s-version-556100 kubelet[1879]: E0717 02:08:30.015337    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0717 02:11:44.286124   10936 logs.go:138] Found kubelet problem: Jul 17 02:08:42 old-k8s-version-556100 kubelet[1879]: E0717 02:08:42.489244    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.286124   10936 logs.go:138] Found kubelet problem: Jul 17 02:08:43 old-k8s-version-556100 kubelet[1879]: E0717 02:08:43.491537    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.287230   10936 logs.go:138] Found kubelet problem: Jul 17 02:08:53 old-k8s-version-556100 kubelet[1879]: E0717 02:08:53.487124    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.287566   10936 logs.go:138] Found kubelet problem: Jul 17 02:08:57 old-k8s-version-556100 kubelet[1879]: E0717 02:08:57.504573    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.287944   10936 logs.go:138] Found kubelet problem: Jul 17 02:09:07 old-k8s-version-556100 kubelet[1879]: E0717 02:09:07.486453    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.287944   10936 logs.go:138] Found kubelet problem: Jul 17 02:09:12 old-k8s-version-556100 kubelet[1879]: E0717 02:09:12.489970    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.287944   10936 logs.go:138] Found kubelet problem: Jul 17 02:09:20 old-k8s-version-556100 kubelet[1879]: E0717 02:09:20.489076    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.289059   10936 logs.go:138] Found kubelet problem: Jul 17 02:09:24 old-k8s-version-556100 kubelet[1879]: E0717 02:09:24.501388    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.290351   10936 logs.go:138] Found kubelet problem: Jul 17 02:09:35 old-k8s-version-556100 kubelet[1879]: E0717 02:09:35.538332    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0717 02:11:44.292601   10936 logs.go:138] Found kubelet problem: Jul 17 02:09:38 old-k8s-version-556100 kubelet[1879]: E0717 02:09:38.485873    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.292601   10936 logs.go:138] Found kubelet problem: Jul 17 02:09:47 old-k8s-version-556100 kubelet[1879]: E0717 02:09:47.491060    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.294438   10936 logs.go:138] Found kubelet problem: Jul 17 02:09:51 old-k8s-version-556100 kubelet[1879]: E0717 02:09:51.087904    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0717 02:11:44.294438   10936 logs.go:138] Found kubelet problem: Jul 17 02:10:02 old-k8s-version-556100 kubelet[1879]: E0717 02:10:02.483792    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.296270   10936 logs.go:138] Found kubelet problem: Jul 17 02:10:02 old-k8s-version-556100 kubelet[1879]: E0717 02:10:02.540111    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.296420   10936 logs.go:138] Found kubelet problem: Jul 17 02:10:14 old-k8s-version-556100 kubelet[1879]: E0717 02:10:14.481266    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.296420   10936 logs.go:138] Found kubelet problem: Jul 17 02:10:15 old-k8s-version-556100 kubelet[1879]: E0717 02:10:15.482722    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.297092   10936 logs.go:138] Found kubelet problem: Jul 17 02:10:27 old-k8s-version-556100 kubelet[1879]: E0717 02:10:27.479567    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.297364   10936 logs.go:138] Found kubelet problem: Jul 17 02:10:30 old-k8s-version-556100 kubelet[1879]: E0717 02:10:30.485991    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.297577   10936 logs.go:138] Found kubelet problem: Jul 17 02:10:41 old-k8s-version-556100 kubelet[1879]: E0717 02:10:41.480505    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.297832   10936 logs.go:138] Found kubelet problem: Jul 17 02:10:43 old-k8s-version-556100 kubelet[1879]: E0717 02:10:43.478329    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.298105   10936 logs.go:138] Found kubelet problem: Jul 17 02:10:52 old-k8s-version-556100 kubelet[1879]: E0717 02:10:52.481369    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.298544   10936 logs.go:138] Found kubelet problem: Jul 17 02:10:54 old-k8s-version-556100 kubelet[1879]: E0717 02:10:54.482186    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.298788   10936 logs.go:138] Found kubelet problem: Jul 17 02:11:05 old-k8s-version-556100 kubelet[1879]: E0717 02:11:05.478059    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.299131   10936 logs.go:138] Found kubelet problem: Jul 17 02:11:09 old-k8s-version-556100 kubelet[1879]: E0717 02:11:09.534742    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.299131   10936 logs.go:138] Found kubelet problem: Jul 17 02:11:19 old-k8s-version-556100 kubelet[1879]: E0717 02:11:19.472912    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.299131   10936 logs.go:138] Found kubelet problem: Jul 17 02:11:24 old-k8s-version-556100 kubelet[1879]: E0717 02:11:24.471547    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.300329   10936 logs.go:138] Found kubelet problem: Jul 17 02:11:33 old-k8s-version-556100 kubelet[1879]: E0717 02:11:33.477616    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.300571   10936 logs.go:138] Found kubelet problem: Jul 17 02:11:37 old-k8s-version-556100 kubelet[1879]: E0717 02:11:37.474142    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	I0717 02:11:44.300571   10936 logs.go:123] Gathering logs for coredns [a36c9df36a2e] ...
	I0717 02:11:44.300571   10936 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36c9df36a2e"
	I0717 02:11:44.363330   10936 logs.go:123] Gathering logs for kube-scheduler [b5b115665dd7] ...
	I0717 02:11:44.363429   10936 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5b115665dd7"
	I0717 02:11:44.431905   10936 logs.go:123] Gathering logs for container status ...
	I0717 02:11:44.432526   10936 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:11:44.637938   10936 logs.go:123] Gathering logs for storage-provisioner [d21e9adcbaa5] ...
	I0717 02:11:44.638003   10936 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d21e9adcbaa5"
	I0717 02:11:44.708242   10936 logs.go:123] Gathering logs for Docker ...
	I0717 02:11:44.708242   10936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 02:11:44.783595   10936 logs.go:123] Gathering logs for dmesg ...
	I0717 02:11:44.783595   10936 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:11:44.834461   10936 logs.go:123] Gathering logs for kube-apiserver [e030f8ebfbdb] ...
	I0717 02:11:44.834461   10936 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e030f8ebfbdb"
	I0717 02:11:45.050424   10936 logs.go:123] Gathering logs for kube-scheduler [66ed77ac46f7] ...
	I0717 02:11:45.050424   10936 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ed77ac46f7"
	I0717 02:11:45.163078   10936 logs.go:123] Gathering logs for kube-proxy [f60d408f22bf] ...
	I0717 02:11:45.163237   10936 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f60d408f22bf"
	I0717 02:11:45.258234   10936 logs.go:123] Gathering logs for kube-controller-manager [17ff785b07a1] ...
	I0717 02:11:45.258885   10936 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17ff785b07a1"
	I0717 02:11:45.355712   10936 logs.go:123] Gathering logs for kube-controller-manager [4771195745ef] ...
	I0717 02:11:45.355712   10936 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4771195745ef"
	I0717 02:11:45.471217   10936 out.go:304] Setting ErrFile to fd 956...
	I0717 02:11:45.471376   10936 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0717 02:11:45.471649   10936 out.go:239] X Problems detected in kubelet:
	W0717 02:11:45.471762   10936 out.go:239]   Jul 17 02:11:09 old-k8s-version-556100 kubelet[1879]: E0717 02:11:09.534742    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:45.471762   10936 out.go:239]   Jul 17 02:11:19 old-k8s-version-556100 kubelet[1879]: E0717 02:11:19.472912    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:45.471824   10936 out.go:239]   Jul 17 02:11:24 old-k8s-version-556100 kubelet[1879]: E0717 02:11:24.471547    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:45.471824   10936 out.go:239]   Jul 17 02:11:33 old-k8s-version-556100 kubelet[1879]: E0717 02:11:33.477616    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:45.471904   10936 out.go:239]   Jul 17 02:11:37 old-k8s-version-556100 kubelet[1879]: E0717 02:11:37.474142    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	I0717 02:11:45.471904   10936 out.go:304] Setting ErrFile to fd 956...
	I0717 02:11:45.471904   10936 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 02:11:55.507639   10936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:11:55.543889   10936 api_server.go:72] duration metric: took 6m10.2496857s to wait for apiserver process to appear ...
	I0717 02:11:55.543889   10936 api_server.go:88] waiting for apiserver healthz status ...
	I0717 02:11:55.685928   10936 out.go:177] 
	W0717 02:11:55.697074   10936 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W0717 02:11:55.697142   10936 out.go:239] * 
	W0717 02:11:55.698514   10936 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 02:11:55.719474   10936 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p old-k8s-version-556100 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-556100
helpers_test.go:235: (dbg) docker inspect old-k8s-version-556100:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "60e7f81ee0854f1281583786996b658a0700297fa7c2830ff4b79c4c4b719709",
	        "Created": "2024-07-17T01:59:55.430446166Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 301554,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-17T02:04:47.162419002Z",
	            "FinishedAt": "2024-07-17T02:04:39.237023998Z"
	        },
	        "Image": "sha256:b90fcd82d9a0f97666ccbedd0bec36ffa6ae451ed5f5fff480c00361af0818c6",
	        "ResolvConfPath": "/var/lib/docker/containers/60e7f81ee0854f1281583786996b658a0700297fa7c2830ff4b79c4c4b719709/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/60e7f81ee0854f1281583786996b658a0700297fa7c2830ff4b79c4c4b719709/hostname",
	        "HostsPath": "/var/lib/docker/containers/60e7f81ee0854f1281583786996b658a0700297fa7c2830ff4b79c4c4b719709/hosts",
	        "LogPath": "/var/lib/docker/containers/60e7f81ee0854f1281583786996b658a0700297fa7c2830ff4b79c4c4b719709/60e7f81ee0854f1281583786996b658a0700297fa7c2830ff4b79c4c4b719709-json.log",
	        "Name": "/old-k8s-version-556100",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-556100:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-556100",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ca149a2cee66b63551a696a2bbbda8a0e71f293a895224d0e9772f9b4c4fe9c5-init/diff:/var/lib/docker/overlay2/6088a4728183ef5756e13b25ed8f3f4eadd6ab8d4c2088bd541d2084f39281eb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ca149a2cee66b63551a696a2bbbda8a0e71f293a895224d0e9772f9b4c4fe9c5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ca149a2cee66b63551a696a2bbbda8a0e71f293a895224d0e9772f9b4c4fe9c5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ca149a2cee66b63551a696a2bbbda8a0e71f293a895224d0e9772f9b4c4fe9c5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-556100",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-556100/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-556100",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-556100",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-556100",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "600874e9106c39d09c26cf2d756470d58401558c7c7e4fd0c691d1fbbca9949c",
	            "SandboxKey": "/var/run/docker/netns/600874e9106c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51025"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51026"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51027"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51028"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51029"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-556100": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "63d6e37d5dcea0b1df177a2bd0b9ec61290cbc04a805ccaa7ed71165211d4e26",
	                    "EndpointID": "3f00987583e522108356076f9abe6d7a5bc4ebf82d6bf12f71f577139cf79b7a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-556100",
	                        "60e7f81ee085"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
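The `NetworkSettings.Ports` section of the inspect output above maps each container port to the random host port Docker published on 127.0.0.1. A minimal sketch of pulling those mappings out of `docker container inspect` JSON with the Python standard library; the trimmed sample below is hypothetical, condensed from the full inspect array shown above:

```python
import json

# Trimmed, hypothetical sample of `docker container inspect <name>` output;
# the real output is a one-element array with many more fields.
inspect_output = """
[
    {
        "NetworkSettings": {
            "Ports": {
                "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "51025"}],
                "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "51029"}]
            }
        }
    }
]
"""

def host_ports(inspect_json: str) -> dict:
    """Map container port (e.g. '8443/tcp') to its published host port."""
    container = json.loads(inspect_json)[0]  # inspect returns an array
    ports = container["NetworkSettings"]["Ports"]
    return {
        port: bindings[0]["HostPort"]
        for port, bindings in ports.items()
        if bindings  # unpublished ports have a null/empty binding list
    }

print(host_ports(inspect_output))  # → {'22/tcp': '51025', '8443/tcp': '51029'}
```

The same data can be read directly with `docker inspect --format` and a Go template; the sketch above is just the post-mortem variant when only the captured JSON is available.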
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-556100 -n old-k8s-version-556100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-556100 -n old-k8s-version-556100: (1.7434505s)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-556100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p old-k8s-version-556100 logs -n 25: (5.6561386s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p no-preload-096400 --memory=2200                     | no-preload-096400            | minikube3\jenkins | v1.33.1 | 17 Jul 24 02:04 UTC | 17 Jul 24 02:09 UTC |
	|         | --alsologtostderr --wait=true                          |                              |                   |         |                     |                     |
	|         | --preload=false --driver=docker                        |                              |                   |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                              |                   |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-556100             | old-k8s-version-556100       | minikube3\jenkins | v1.33.1 | 17 Jul 24 02:04 UTC | 17 Jul 24 02:04 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |                   |         |                     |                     |
	| start   | -p old-k8s-version-556100                              | old-k8s-version-556100       | minikube3\jenkins | v1.33.1 | 17 Jul 24 02:04 UTC |                     |
	|         | --memory=2200                                          |                              |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |                   |         |                     |                     |
	|         | --kvm-network=default                                  |                              |                   |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |                   |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |                   |         |                     |                     |
	|         | --keep-context=false                                   |                              |                   |         |                     |                     |
	|         | --driver=docker                                        |                              |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |                   |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-970500  | default-k8s-diff-port-970500 | minikube3\jenkins | v1.33.1 | 17 Jul 24 02:04 UTC | 17 Jul 24 02:04 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |                   |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-970500 | minikube3\jenkins | v1.33.1 | 17 Jul 24 02:04 UTC | 17 Jul 24 02:05 UTC |
	|         | default-k8s-diff-port-970500                           |                              |                   |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |                   |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-970500       | default-k8s-diff-port-970500 | minikube3\jenkins | v1.33.1 | 17 Jul 24 02:05 UTC | 17 Jul 24 02:05 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |                   |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-970500 | minikube3\jenkins | v1.33.1 | 17 Jul 24 02:05 UTC | 17 Jul 24 02:10 UTC |
	|         | default-k8s-diff-port-970500                           |                              |                   |         |                     |                     |
	|         | --memory=2200                                          |                              |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |                   |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |                   |         |                     |                     |
	|         | --driver=docker                                        |                              |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.2                           |                              |                   |         |                     |                     |
	| image   | embed-certs-395100 image list                          | embed-certs-395100           | minikube3\jenkins | v1.33.1 | 17 Jul 24 02:09 UTC | 17 Jul 24 02:09 UTC |
	|         | --format=json                                          |                              |                   |         |                     |                     |
	| pause   | -p embed-certs-395100                                  | embed-certs-395100           | minikube3\jenkins | v1.33.1 | 17 Jul 24 02:09 UTC | 17 Jul 24 02:09 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| unpause | -p embed-certs-395100                                  | embed-certs-395100           | minikube3\jenkins | v1.33.1 | 17 Jul 24 02:09 UTC | 17 Jul 24 02:09 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| delete  | -p embed-certs-395100                                  | embed-certs-395100           | minikube3\jenkins | v1.33.1 | 17 Jul 24 02:09 UTC | 17 Jul 24 02:09 UTC |
	| delete  | -p embed-certs-395100                                  | embed-certs-395100           | minikube3\jenkins | v1.33.1 | 17 Jul 24 02:09 UTC | 17 Jul 24 02:09 UTC |
	| start   | -p newest-cni-861300 --memory=2200 --alsologtostderr   | newest-cni-861300            | minikube3\jenkins | v1.33.1 | 17 Jul 24 02:09 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |                   |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |                   |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |                   |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |                   |         |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.31.0-beta.0    |                              |                   |         |                     |                     |
	| image   | no-preload-096400 image list                           | no-preload-096400            | minikube3\jenkins | v1.33.1 | 17 Jul 24 02:10 UTC | 17 Jul 24 02:10 UTC |
	|         | --format=json                                          |                              |                   |         |                     |                     |
	| pause   | -p no-preload-096400                                   | no-preload-096400            | minikube3\jenkins | v1.33.1 | 17 Jul 24 02:10 UTC | 17 Jul 24 02:10 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| unpause | -p no-preload-096400                                   | no-preload-096400            | minikube3\jenkins | v1.33.1 | 17 Jul 24 02:10 UTC | 17 Jul 24 02:10 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| delete  | -p no-preload-096400                                   | no-preload-096400            | minikube3\jenkins | v1.33.1 | 17 Jul 24 02:10 UTC | 17 Jul 24 02:10 UTC |
	| delete  | -p no-preload-096400                                   | no-preload-096400            | minikube3\jenkins | v1.33.1 | 17 Jul 24 02:10 UTC | 17 Jul 24 02:10 UTC |
	| start   | -p auto-901900 --memory=3072                           | auto-901900                  | minikube3\jenkins | v1.33.1 | 17 Jul 24 02:10 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |                   |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |                   |         |                     |                     |
	|         | --driver=docker                                        |                              |                   |         |                     |                     |
	| image   | default-k8s-diff-port-970500                           | default-k8s-diff-port-970500 | minikube3\jenkins | v1.33.1 | 17 Jul 24 02:10 UTC | 17 Jul 24 02:10 UTC |
	|         | image list --format=json                               |                              |                   |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-970500 | minikube3\jenkins | v1.33.1 | 17 Jul 24 02:10 UTC | 17 Jul 24 02:10 UTC |
	|         | default-k8s-diff-port-970500                           |                              |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-970500 | minikube3\jenkins | v1.33.1 | 17 Jul 24 02:11 UTC | 17 Jul 24 02:11 UTC |
	|         | default-k8s-diff-port-970500                           |                              |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-970500 | minikube3\jenkins | v1.33.1 | 17 Jul 24 02:11 UTC | 17 Jul 24 02:11 UTC |
	|         | default-k8s-diff-port-970500                           |                              |                   |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-970500 | minikube3\jenkins | v1.33.1 | 17 Jul 24 02:11 UTC | 17 Jul 24 02:11 UTC |
	|         | default-k8s-diff-port-970500                           |                              |                   |         |                     |                     |
	| start   | -p kindnet-901900                                      | kindnet-901900               | minikube3\jenkins | v1.33.1 | 17 Jul 24 02:11 UTC |                     |
	|         | --memory=3072                                          |                              |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |                   |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |                   |         |                     |                     |
	|         | --cni=kindnet --driver=docker                          |                              |                   |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 02:11:29
	Running on machine: minikube3
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
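The log lines that follow use the klog header layout described above: `[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg`. A minimal Python sketch of splitting such a line into its fields (the regex and field names are this sketch's own, not part of minikube or klog):

```python
import re

# klog header: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
KLOG_RE = re.compile(
    r"^(?P<level>[IWEF])"            # I=info, W=warning, E=error, F=fatal
    r"(?P<month>\d{2})(?P<day>\d{2}) "
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d+)"
    r"\s+(?P<threadid>\d+) "
    r"(?P<file>[^:]+):(?P<line>\d+)\] "
    r"(?P<msg>.*)$"
)

def parse_klog(line: str):
    """Return the header fields as a dict, or None if the line doesn't match."""
    m = KLOG_RE.match(line)
    return m.groupdict() if m else None

rec = parse_klog(
    "I0717 02:11:29.860207    8116 out.go:291] Setting OutFile to fd 1892 ..."
)
print(rec["level"], rec["file"], rec["line"])  # → I out.go 291
```

Filtering a captured log for one goroutine's lines (e.g. `threadid == "8116"`) is often the quickest way to untangle the interleaved `8116`/`13868` streams visible below.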
	I0717 02:11:29.860207    8116 out.go:291] Setting OutFile to fd 1892 ...
	I0717 02:11:29.860776    8116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 02:11:29.860776    8116 out.go:304] Setting ErrFile to fd 2044...
	I0717 02:11:29.860839    8116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 02:11:29.894183    8116 out.go:298] Setting JSON to false
	I0717 02:11:29.904297    8116 start.go:129] hostinfo: {"hostname":"minikube3","uptime":14304,"bootTime":1721167984,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0717 02:11:29.904297    8116 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 02:11:29.914458    8116 out.go:177] * [kindnet-901900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0717 02:11:29.919436    8116 notify.go:220] Checking for updates...
	I0717 02:11:29.924726    8116 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0717 02:11:29.935585    8116 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 02:11:29.944141    8116 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0717 02:11:29.954867    8116 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 02:11:29.967296    8116 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 02:11:27.023453   13868 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-901900 --name auto-901900 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-901900 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-901900 --network auto-901900 --ip 192.168.94.2 --volume auto-901900:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e: (3.9767879s)
	I0717 02:11:27.049234   13868 cli_runner.go:164] Run: docker container inspect auto-901900 --format={{.State.Running}}
	I0717 02:11:27.328327   13868 cli_runner.go:164] Run: docker container inspect auto-901900 --format={{.State.Status}}
	I0717 02:11:27.602007   13868 cli_runner.go:164] Run: docker exec auto-901900 stat /var/lib/dpkg/alternatives/iptables
	I0717 02:11:28.009589   13868 oci.go:144] the created container "auto-901900" has a running status.
	I0717 02:11:28.009589   13868 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\auto-901900\id_rsa...
	I0717 02:11:28.375611   13868 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\auto-901900\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 02:11:28.719288   13868 cli_runner.go:164] Run: docker container inspect auto-901900 --format={{.State.Status}}
	I0717 02:11:28.996820   13868 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 02:11:28.996863   13868 kic_runner.go:114] Args: [docker exec --privileged auto-901900 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 02:11:29.376689   13868 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\auto-901900\id_rsa...
	I0717 02:11:29.973434    8116 config.go:182] Loaded profile config "auto-901900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 02:11:29.974457    8116 config.go:182] Loaded profile config "newest-cni-861300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0717 02:11:29.974799    8116 config.go:182] Loaded profile config "old-k8s-version-556100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0717 02:11:29.974799    8116 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 02:11:30.409013    8116 docker.go:123] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0717 02:11:30.422511    8116 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 02:11:30.886547    8116 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:98 OomKillDisable:true NGoroutines:97 SystemTime:2024-07-17 02:11:30.828286833 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0717 02:11:30.896497    8116 out.go:177] * Using the docker driver based on user configuration
	I0717 02:11:30.898889    8116 start.go:297] selected driver: docker
	I0717 02:11:30.898889    8116 start.go:901] validating driver "docker" against <nil>
	I0717 02:11:30.898889    8116 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 02:11:31.005897    8116 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 02:11:31.459549    8116 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:98 OomKillDisable:true NGoroutines:97 SystemTime:2024-07-17 02:11:31.398452179 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:D
ocker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0717 02:11:31.459549    8116 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 02:11:31.463017    8116 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 02:11:31.467914    8116 out.go:177] * Using Docker Desktop driver with root privileges
	I0717 02:11:31.472483    8116 cni.go:84] Creating CNI manager for "kindnet"
	I0717 02:11:31.472483    8116 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 02:11:31.472483    8116 start.go:340] cluster config:
	{Name:kindnet-901900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kindnet-901900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 02:11:31.476244    8116 out.go:177] * Starting "kindnet-901900" primary control-plane node in "kindnet-901900" cluster
	I0717 02:11:31.483053    8116 cache.go:121] Beginning downloading kic base image for docker with docker
	I0717 02:11:31.488281    8116 out.go:177] * Pulling base image v0.0.44-1721146479-19264 ...
	I0717 02:11:29.292866   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:11:31.503703   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:11:31.493608    8116 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 02:11:31.493608    8116 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local docker daemon
	I0717 02:11:31.493608    8116 preload.go:146] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0717 02:11:31.493608    8116 cache.go:56] Caching tarball of preloaded images
	I0717 02:11:31.494346    8116 preload.go:172] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 02:11:31.494574    8116 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0717 02:11:31.495003    8116 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-901900\config.json ...
	I0717 02:11:31.495040    8116 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-901900\config.json: {Name:mk07d62e7e33304bd832be31547b993d6cd216aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0717 02:11:31.761657    8116 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e is of wrong architecture
	I0717 02:11:31.761755    8116 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e to local cache
	I0717 02:11:31.761828    8116 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721146479-19264@sha256_7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar
	I0717 02:11:31.762078    8116 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721146479-19264@sha256_7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar
	I0717 02:11:31.762147    8116 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local cache directory
	I0717 02:11:31.762420    8116 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local cache directory, skipping pull
	I0717 02:11:31.762527    8116 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e exists in cache, skipping pull
	I0717 02:11:31.762658    8116 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e as a tarball
	I0717 02:11:31.762658    8116 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e from local cache
	I0717 02:11:31.762718    8116 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721146479-19264@sha256_7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar
	I0717 02:11:32.712861    8116 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e from cached tarball
	I0717 02:11:32.712861    8116 cache.go:194] Successfully downloaded all kic artifacts
	I0717 02:11:32.712861    8116 start.go:360] acquireMachinesLock for kindnet-901900: {Name:mkaa1c1c576da561b6e76ea21d0f39ffe42bcc1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 02:11:32.713628    8116 start.go:364] duration metric: took 714µs to acquireMachinesLock for "kindnet-901900"
	I0717 02:11:32.713628    8116 start.go:93] Provisioning new machine with config: &{Name:kindnet-901900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:kindnet-901900 Namespace:default APIServerHAVIP: APIServerName:m
inikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 02:11:32.713628    8116 start.go:125] createHost starting for "" (driver="docker")
	I0717 02:11:27.954755    9156 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 02:11:28.051478    9156 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7712.pem --> /usr/share/ca-certificates/7712.pem (1338 bytes)
	I0717 02:11:28.129026    9156 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77122.pem --> /usr/share/ca-certificates/77122.pem (1708 bytes)
	I0717 02:11:28.230252    9156 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 02:11:28.311110    9156 ssh_runner.go:195] Run: openssl version
	I0717 02:11:28.365135    9156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 02:11:28.422468    9156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 02:11:28.449887    9156 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0717 02:11:28.468170    9156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 02:11:28.522687    9156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 02:11:28.590969    9156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7712.pem && ln -fs /usr/share/ca-certificates/7712.pem /etc/ssl/certs/7712.pem"
	I0717 02:11:28.655868    9156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7712.pem
	I0717 02:11:28.681947    9156 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 00:42 /usr/share/ca-certificates/7712.pem
	I0717 02:11:28.702289    9156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7712.pem
	I0717 02:11:28.761868    9156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7712.pem /etc/ssl/certs/51391683.0"
	I0717 02:11:29.016431    9156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77122.pem && ln -fs /usr/share/ca-certificates/77122.pem /etc/ssl/certs/77122.pem"
	I0717 02:11:29.072074    9156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77122.pem
	I0717 02:11:29.089005    9156 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 00:42 /usr/share/ca-certificates/77122.pem
	I0717 02:11:29.110662    9156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77122.pem
	I0717 02:11:29.169031    9156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77122.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 02:11:29.214868    9156 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0717 02:11:29.242037    9156 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0717 02:11:29.242037    9156 kubeadm.go:392] StartCluster: {Name:newest-cni-861300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-861300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 02:11:29.260301    9156 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 02:11:29.387328    9156 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 02:11:29.467398    9156 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 02:11:29.512891    9156 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0717 02:11:29.539113    9156 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 02:11:29.580068    9156 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 02:11:29.580068    9156 kubeadm.go:157] found existing configuration files:
	
	I0717 02:11:29.608222    9156 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 02:11:29.651697    9156 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0717 02:11:29.679235    9156 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0717 02:11:29.745543    9156 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 02:11:29.782626    9156 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0717 02:11:29.800953    9156 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0717 02:11:29.854714    9156 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 02:11:29.893794    9156 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0717 02:11:29.914458    9156 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 02:11:29.979921    9156 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 02:11:30.023715    9156 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0717 02:11:30.050006    9156 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 02:11:30.088736    9156 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 02:11:30.235723    9156 kubeadm.go:310] W0717 02:11:30.232944    2018 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0717 02:11:30.238373    9156 kubeadm.go:310] W0717 02:11:30.235287    2018 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0717 02:11:30.333668    9156 kubeadm.go:310] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I0717 02:11:30.561253    9156 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 02:11:32.719088    8116 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0717 02:11:32.719758    8116 start.go:159] libmachine.API.Create for "kindnet-901900" (driver="docker")
	I0717 02:11:32.719929    8116 client.go:168] LocalClient.Create starting
	I0717 02:11:32.720686    8116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0717 02:11:32.720686    8116 main.go:141] libmachine: Decoding PEM data...
	I0717 02:11:32.720686    8116 main.go:141] libmachine: Parsing certificate...
	I0717 02:11:32.721334    8116 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0717 02:11:32.721651    8116 main.go:141] libmachine: Decoding PEM data...
	I0717 02:11:32.721651    8116 main.go:141] libmachine: Parsing certificate...
	I0717 02:11:32.735798    8116 cli_runner.go:164] Run: docker network inspect kindnet-901900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 02:11:32.970820    8116 cli_runner.go:211] docker network inspect kindnet-901900 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 02:11:32.988761    8116 network_create.go:284] running [docker network inspect kindnet-901900] to gather additional debugging logs...
	I0717 02:11:32.988761    8116 cli_runner.go:164] Run: docker network inspect kindnet-901900
	W0717 02:11:33.227997    8116 cli_runner.go:211] docker network inspect kindnet-901900 returned with exit code 1
	I0717 02:11:33.227997    8116 network_create.go:287] error running [docker network inspect kindnet-901900]: docker network inspect kindnet-901900: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-901900 not found
	I0717 02:11:33.227997    8116 network_create.go:289] output of [docker network inspect kindnet-901900]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-901900 not found
	
	** /stderr **
	I0717 02:11:33.248336    8116 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 02:11:33.519479    8116 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0717 02:11:33.549823    8116 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0717 02:11:33.581245    8116 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0717 02:11:33.627634    8116 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0717 02:11:33.657933    8116 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0717 02:11:33.704169    8116 network.go:209] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0717 02:11:33.753336    8116 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0017a88d0}
	I0717 02:11:33.753872    8116 network_create.go:124] attempt to create docker network kindnet-901900 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0717 02:11:33.765153    8116 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-901900 kindnet-901900
	I0717 02:11:34.162974    8116 network_create.go:108] docker network kindnet-901900 192.168.103.0/24 created
	I0717 02:11:34.162974    8116 kic.go:121] calculated static IP "192.168.103.2" for the "kindnet-901900" container
	I0717 02:11:34.200990    8116 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 02:11:34.463261    8116 cli_runner.go:164] Run: docker volume create kindnet-901900 --label name.minikube.sigs.k8s.io=kindnet-901900 --label created_by.minikube.sigs.k8s.io=true
	I0717 02:11:34.716627    8116 oci.go:103] Successfully created a docker volume kindnet-901900
	I0717 02:11:34.729829    8116 cli_runner.go:164] Run: docker run --rm --name kindnet-901900-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-901900 --entrypoint /usr/bin/test -v kindnet-901900:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e -d /var/lib
	I0717 02:11:33.032548   13868 cli_runner.go:164] Run: docker container inspect auto-901900 --format={{.State.Status}}
	I0717 02:11:33.275554   13868 machine.go:94] provisionDockerMachine start ...
	I0717 02:11:33.292881   13868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-901900
	I0717 02:11:33.540622   13868 main.go:141] libmachine: Using SSH client type: native
	I0717 02:11:33.551755   13868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x125a9e0] 0x125d5c0 <nil>  [] 0s} 127.0.0.1 51448 <nil> <nil>}
	I0717 02:11:33.551755   13868 main.go:141] libmachine: About to run SSH command:
	hostname
	I0717 02:11:33.800305   13868 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-901900
	
	I0717 02:11:33.800305   13868 ubuntu.go:169] provisioning hostname "auto-901900"
	I0717 02:11:33.817056   13868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-901900
	I0717 02:11:34.063660   13868 main.go:141] libmachine: Using SSH client type: native
	I0717 02:11:34.064474   13868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x125a9e0] 0x125d5c0 <nil>  [] 0s} 127.0.0.1 51448 <nil> <nil>}
	I0717 02:11:34.064532   13868 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-901900 && echo "auto-901900" | sudo tee /etc/hostname
	I0717 02:11:34.368537   13868 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-901900
	
	I0717 02:11:34.387241   13868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-901900
	I0717 02:11:34.647917   13868 main.go:141] libmachine: Using SSH client type: native
	I0717 02:11:34.649138   13868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x125a9e0] 0x125d5c0 <nil>  [] 0s} 127.0.0.1 51448 <nil> <nil>}
	I0717 02:11:34.649201   13868 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-901900' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-901900/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-901900' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 02:11:34.882563   13868 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 02:11:34.882563   13868 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0717 02:11:34.882563   13868 ubuntu.go:177] setting up certificates
	I0717 02:11:34.882563   13868 provision.go:84] configureAuth start
	I0717 02:11:34.899541   13868 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-901900
	I0717 02:11:33.991793   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:11:36.002429   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:11:38.586663    8116 cli_runner.go:217] Completed: docker run --rm --name kindnet-901900-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-901900 --entrypoint /usr/bin/test -v kindnet-901900:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e -d /var/lib: (3.8568003s)
	I0717 02:11:38.586663    8116 oci.go:107] Successfully prepared a docker volume kindnet-901900
	I0717 02:11:38.586663    8116 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 02:11:38.586663    8116 kic.go:194] Starting extracting preloaded images to volume ...
	I0717 02:11:38.600400    8116 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-901900:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 02:11:35.140404   13868 provision.go:143] copyHostCerts
	I0717 02:11:35.141004   13868 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0717 02:11:35.141070   13868 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0717 02:11:35.141507   13868 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0717 02:11:35.142679   13868 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0717 02:11:35.142679   13868 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0717 02:11:35.143559   13868 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0717 02:11:35.144937   13868 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0717 02:11:35.145015   13868 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0717 02:11:35.145657   13868 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0717 02:11:35.146981   13868 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.auto-901900 san=[127.0.0.1 192.168.94.2 auto-901900 localhost minikube]
	I0717 02:11:35.717970   13868 provision.go:177] copyRemoteCerts
	I0717 02:11:35.728595   13868 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 02:11:35.748625   13868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-901900
	I0717 02:11:35.970794   13868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51448 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\auto-901900\id_rsa Username:docker}
	I0717 02:11:36.150608   13868 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 02:11:36.207902   13868 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1204 bytes)
	I0717 02:11:36.268880   13868 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 02:11:36.331329   13868 provision.go:87] duration metric: took 1.4487533s to configureAuth
	I0717 02:11:36.331329   13868 ubuntu.go:193] setting minikube options for container-runtime
	I0717 02:11:36.331972   13868 config.go:182] Loaded profile config "auto-901900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 02:11:36.343555   13868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-901900
	I0717 02:11:36.560502   13868 main.go:141] libmachine: Using SSH client type: native
	I0717 02:11:36.561405   13868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x125a9e0] 0x125d5c0 <nil>  [] 0s} 127.0.0.1 51448 <nil> <nil>}
	I0717 02:11:36.561580   13868 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 02:11:36.796968   13868 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0717 02:11:36.796968   13868 ubuntu.go:71] root file system type: overlay
	I0717 02:11:36.797228   13868 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 02:11:36.810291   13868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-901900
	I0717 02:11:37.022494   13868 main.go:141] libmachine: Using SSH client type: native
	I0717 02:11:37.022789   13868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x125a9e0] 0x125d5c0 <nil>  [] 0s} 127.0.0.1 51448 <nil> <nil>}
	I0717 02:11:37.022789   13868 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 02:11:37.423978   13868 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 02:11:37.439785   13868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-901900
	I0717 02:11:37.667328   13868 main.go:141] libmachine: Using SSH client type: native
	I0717 02:11:37.667328   13868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x125a9e0] 0x125d5c0 <nil>  [] 0s} 127.0.0.1 51448 <nil> <nil>}
	I0717 02:11:37.668379   13868 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 02:11:38.011901   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:11:40.491992   10936 pod_ready.go:102] pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace has status "Ready":"False"
	I0717 02:11:41.974141   10936 pod_ready.go:81] duration metric: took 4m0.0070389s for pod "metrics-server-9975d5f86-bj56w" in "kube-system" namespace to be "Ready" ...
	E0717 02:11:41.974141   10936 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 02:11:41.974141   10936 pod_ready.go:38] duration metric: took 5m38.4048158s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 02:11:41.974303   10936 api_server.go:52] waiting for apiserver process to appear ...
	I0717 02:11:41.985881   10936 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 02:11:40.628256   13868 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-06-29 00:00:53.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-07-17 02:11:37.410469825 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0717 02:11:40.628308   13868 machine.go:97] duration metric: took 7.3525931s to provisionDockerMachine
	I0717 02:11:40.628308   13868 client.go:171] duration metric: took 48.188718s to LocalClient.Create
	I0717 02:11:40.628451   13868 start.go:167] duration metric: took 48.1894798s to libmachine.API.Create "auto-901900"
	I0717 02:11:40.628498   13868 start.go:293] postStartSetup for "auto-901900" (driver="docker")
	I0717 02:11:40.628553   13868 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 02:11:40.653095   13868 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 02:11:40.667878   13868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-901900
	I0717 02:11:40.889008   13868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51448 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\auto-901900\id_rsa Username:docker}
	I0717 02:11:41.084269   13868 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 02:11:41.107155   13868 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 02:11:41.107273   13868 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 02:11:41.107317   13868 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 02:11:41.107317   13868 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0717 02:11:41.107369   13868 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0717 02:11:41.107654   13868 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0717 02:11:41.109016   13868 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77122.pem -> 77122.pem in /etc/ssl/certs
	I0717 02:11:41.124772   13868 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 02:11:41.154773   13868 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77122.pem --> /etc/ssl/certs/77122.pem (1708 bytes)
	I0717 02:11:41.219051   13868 start.go:296] duration metric: took 590.5482ms for postStartSetup
	I0717 02:11:41.242808   13868 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-901900
	I0717 02:11:41.462768   13868 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-901900\config.json ...
	I0717 02:11:41.487294   13868 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 02:11:41.512105   13868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-901900
	I0717 02:11:41.721780   13868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51448 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\auto-901900\id_rsa Username:docker}
	I0717 02:11:41.884772   13868 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 02:11:41.900842   13868 start.go:128] duration metric: took 49.4708251s to createHost
	I0717 02:11:41.900842   13868 start.go:83] releasing machines lock for "auto-901900", held for 49.4723673s
	I0717 02:11:41.913083   13868 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-901900
	I0717 02:11:42.129526   13868 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0717 02:11:42.150142   13868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-901900
	I0717 02:11:42.156746   13868 ssh_runner.go:195] Run: cat /version.json
	I0717 02:11:42.185838   13868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-901900
	I0717 02:11:42.402985   13868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51448 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\auto-901900\id_rsa Username:docker}
	I0717 02:11:42.433863   13868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51448 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\auto-901900\id_rsa Username:docker}
	W0717 02:11:42.559118   13868 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0717 02:11:42.614666   13868 ssh_runner.go:195] Run: systemctl --version
	I0717 02:11:42.652335   13868 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 02:11:42.667722   13868 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W0717 02:11:42.667722   13868 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0717 02:11:42.695641   13868 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W0717 02:11:42.732233   13868 start.go:439] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I0717 02:11:42.751483   13868 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 02:11:43.517548   13868 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 02:11:43.517548   13868 start.go:495] detecting cgroup driver to use...
	I0717 02:11:43.517655   13868 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0717 02:11:43.517847   13868 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 02:11:43.571724   13868 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 02:11:43.625387   13868 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 02:11:43.691581   13868 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 02:11:43.708790   13868 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 02:11:43.767137   13868 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 02:11:43.827388   13868 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 02:11:43.879396   13868 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 02:11:43.936250   13868 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 02:11:43.994642   13868 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 02:11:44.047944   13868 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0717 02:11:44.090903   13868 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0717 02:11:44.150110   13868 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 02:11:44.205364   13868 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 02:11:44.260087   13868 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 02:11:44.498849   13868 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 02:11:44.812209   13868 start.go:495] detecting cgroup driver to use...
	I0717 02:11:44.812850   13868 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0717 02:11:44.842010   13868 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 02:11:44.881056   13868 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0717 02:11:44.900263   13868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 02:11:44.968093   13868 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 02:11:42.065477   10936 logs.go:276] 2 containers: [f1fe02b4a78a e030f8ebfbdb]
	I0717 02:11:42.078231   10936 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 02:11:42.151268   10936 logs.go:276] 2 containers: [0c11f86257e5 ba8f75033d8c]
	I0717 02:11:42.172374   10936 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 02:11:42.247650   10936 logs.go:276] 2 containers: [a36c9df36a2e 58914faf6333]
	I0717 02:11:42.261655   10936 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 02:11:42.349030   10936 logs.go:276] 2 containers: [b5b115665dd7 66ed77ac46f7]
	I0717 02:11:42.365722   10936 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 02:11:42.465419   10936 logs.go:276] 2 containers: [f60d408f22bf 0d09bed0c3c5]
	I0717 02:11:42.483868   10936 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 02:11:42.555634   10936 logs.go:276] 2 containers: [17ff785b07a1 4771195745ef]
	I0717 02:11:42.570151   10936 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 02:11:42.651604   10936 logs.go:276] 0 containers: []
	W0717 02:11:42.651664   10936 logs.go:278] No container was found matching "kindnet"
	I0717 02:11:42.666894   10936 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0717 02:11:42.755379   10936 logs.go:276] 2 containers: [d21e9adcbaa5 ef8db1d0f6c0]
	I0717 02:11:42.768684   10936 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 02:11:42.825255   10936 logs.go:276] 1 containers: [d2895cf887fb]
	I0717 02:11:42.825255   10936 logs.go:123] Gathering logs for etcd [0c11f86257e5] ...
	I0717 02:11:42.825255   10936 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0c11f86257e5"
	I0717 02:11:42.921224   10936 logs.go:123] Gathering logs for kube-proxy [0d09bed0c3c5] ...
	I0717 02:11:42.921224   10936 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d09bed0c3c5"
	I0717 02:11:42.990780   10936 logs.go:123] Gathering logs for kubernetes-dashboard [d2895cf887fb] ...
	I0717 02:11:42.991010   10936 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2895cf887fb"
	I0717 02:11:43.056038   10936 logs.go:123] Gathering logs for describe nodes ...
	I0717 02:11:43.056038   10936 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0717 02:11:43.692230   10936 logs.go:123] Gathering logs for kube-apiserver [f1fe02b4a78a] ...
	I0717 02:11:43.692322   10936 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1fe02b4a78a"
	I0717 02:11:43.830115   10936 logs.go:123] Gathering logs for etcd [ba8f75033d8c] ...
	I0717 02:11:43.830115   10936 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba8f75033d8c"
	I0717 02:11:43.959253   10936 logs.go:123] Gathering logs for coredns [58914faf6333] ...
	I0717 02:11:43.959253   10936 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 58914faf6333"
	I0717 02:11:44.047944   10936 logs.go:123] Gathering logs for storage-provisioner [ef8db1d0f6c0] ...
	I0717 02:11:44.047944   10936 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef8db1d0f6c0"
	I0717 02:11:44.111462   10936 logs.go:123] Gathering logs for kubelet ...
	I0717 02:11:44.111542   10936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0717 02:11:44.233338   10936 logs.go:138] Found kubelet problem: Jul 17 02:06:16 old-k8s-version-556100 kubelet[1879]: E0717 02:06:16.774645    1879 pod_workers.go:191] Error syncing pod a3e7be694ef7cf952503c5d331abc0ac ("kube-controller-manager-old-k8s-version-556100_kube-system(a3e7be694ef7cf952503c5d331abc0ac)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-old-k8s-version-556100_kube-system(a3e7be694ef7cf952503c5d331abc0ac)"
	W0717 02:11:44.241766   10936 logs.go:138] Found kubelet problem: Jul 17 02:06:17 old-k8s-version-556100 kubelet[1879]: E0717 02:06:17.382403    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0717 02:11:44.243396   10936 logs.go:138] Found kubelet problem: Jul 17 02:06:18 old-k8s-version-556100 kubelet[1879]: E0717 02:06:18.474647    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.243833   10936 logs.go:138] Found kubelet problem: Jul 17 02:06:19 old-k8s-version-556100 kubelet[1879]: E0717 02:06:19.063870    1879 pod_workers.go:191] Error syncing pod a3e7be694ef7cf952503c5d331abc0ac ("kube-controller-manager-old-k8s-version-556100_kube-system(a3e7be694ef7cf952503c5d331abc0ac)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-old-k8s-version-556100_kube-system(a3e7be694ef7cf952503c5d331abc0ac)"
	W0717 02:11:44.243833   10936 logs.go:138] Found kubelet problem: Jul 17 02:06:19 old-k8s-version-556100 kubelet[1879]: E0717 02:06:19.600033    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.248869   10936 logs.go:138] Found kubelet problem: Jul 17 02:06:30 old-k8s-version-556100 kubelet[1879]: E0717 02:06:30.559826    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0717 02:11:44.250203   10936 logs.go:138] Found kubelet problem: Jul 17 02:06:39 old-k8s-version-556100 kubelet[1879]: E0717 02:06:39.023865    1879 pod_workers.go:191] Error syncing pod a5a2df01-7a16-4e85-a81e-2c4dafdf61cc ("storage-provisioner_kube-system(a5a2df01-7a16-4e85-a81e-2c4dafdf61cc)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a5a2df01-7a16-4e85-a81e-2c4dafdf61cc)"
	W0717 02:11:44.250799   10936 logs.go:138] Found kubelet problem: Jul 17 02:06:45 old-k8s-version-556100 kubelet[1879]: E0717 02:06:45.502107    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.256037   10936 logs.go:138] Found kubelet problem: Jul 17 02:06:51 old-k8s-version-556100 kubelet[1879]: E0717 02:06:51.199878    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0717 02:11:44.259165   10936 logs.go:138] Found kubelet problem: Jul 17 02:06:51 old-k8s-version-556100 kubelet[1879]: E0717 02:06:51.863868    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.259659   10936 logs.go:138] Found kubelet problem: Jul 17 02:06:52 old-k8s-version-556100 kubelet[1879]: E0717 02:06:52.910168    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.261340   10936 logs.go:138] Found kubelet problem: Jul 17 02:07:11 old-k8s-version-556100 kubelet[1879]: E0717 02:07:11.144160    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0717 02:11:44.267451   10936 logs.go:138] Found kubelet problem: Jul 17 02:07:12 old-k8s-version-556100 kubelet[1879]: E0717 02:07:12.237780    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0717 02:11:44.268066   10936 logs.go:138] Found kubelet problem: Jul 17 02:07:24 old-k8s-version-556100 kubelet[1879]: E0717 02:07:24.496177    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.268917   10936 logs.go:138] Found kubelet problem: Jul 17 02:07:25 old-k8s-version-556100 kubelet[1879]: E0717 02:07:25.494854    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.269338   10936 logs.go:138] Found kubelet problem: Jul 17 02:07:36 old-k8s-version-556100 kubelet[1879]: E0717 02:07:36.494911    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.273224   10936 logs.go:138] Found kubelet problem: Jul 17 02:07:39 old-k8s-version-556100 kubelet[1879]: E0717 02:07:39.015113    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0717 02:11:44.274372   10936 logs.go:138] Found kubelet problem: Jul 17 02:07:49 old-k8s-version-556100 kubelet[1879]: E0717 02:07:49.492826    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.274372   10936 logs.go:138] Found kubelet problem: Jul 17 02:07:51 old-k8s-version-556100 kubelet[1879]: E0717 02:07:51.507447    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.275392   10936 logs.go:138] Found kubelet problem: Jul 17 02:08:02 old-k8s-version-556100 kubelet[1879]: E0717 02:08:02.492712    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.280714   10936 logs.go:138] Found kubelet problem: Jul 17 02:08:03 old-k8s-version-556100 kubelet[1879]: E0717 02:08:03.543474    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0717 02:11:44.281044   10936 logs.go:138] Found kubelet problem: Jul 17 02:08:14 old-k8s-version-556100 kubelet[1879]: E0717 02:08:14.495060    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.281044   10936 logs.go:138] Found kubelet problem: Jul 17 02:08:17 old-k8s-version-556100 kubelet[1879]: E0717 02:08:17.494318    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.281044   10936 logs.go:138] Found kubelet problem: Jul 17 02:08:28 old-k8s-version-556100 kubelet[1879]: E0717 02:08:28.489246    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.286124   10936 logs.go:138] Found kubelet problem: Jul 17 02:08:30 old-k8s-version-556100 kubelet[1879]: E0717 02:08:30.015337    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0717 02:11:44.286124   10936 logs.go:138] Found kubelet problem: Jul 17 02:08:42 old-k8s-version-556100 kubelet[1879]: E0717 02:08:42.489244    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.286124   10936 logs.go:138] Found kubelet problem: Jul 17 02:08:43 old-k8s-version-556100 kubelet[1879]: E0717 02:08:43.491537    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.287230   10936 logs.go:138] Found kubelet problem: Jul 17 02:08:53 old-k8s-version-556100 kubelet[1879]: E0717 02:08:53.487124    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.287566   10936 logs.go:138] Found kubelet problem: Jul 17 02:08:57 old-k8s-version-556100 kubelet[1879]: E0717 02:08:57.504573    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.287944   10936 logs.go:138] Found kubelet problem: Jul 17 02:09:07 old-k8s-version-556100 kubelet[1879]: E0717 02:09:07.486453    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.287944   10936 logs.go:138] Found kubelet problem: Jul 17 02:09:12 old-k8s-version-556100 kubelet[1879]: E0717 02:09:12.489970    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.287944   10936 logs.go:138] Found kubelet problem: Jul 17 02:09:20 old-k8s-version-556100 kubelet[1879]: E0717 02:09:20.489076    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.289059   10936 logs.go:138] Found kubelet problem: Jul 17 02:09:24 old-k8s-version-556100 kubelet[1879]: E0717 02:09:24.501388    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.290351   10936 logs.go:138] Found kubelet problem: Jul 17 02:09:35 old-k8s-version-556100 kubelet[1879]: E0717 02:09:35.538332    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0717 02:11:44.292601   10936 logs.go:138] Found kubelet problem: Jul 17 02:09:38 old-k8s-version-556100 kubelet[1879]: E0717 02:09:38.485873    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.292601   10936 logs.go:138] Found kubelet problem: Jul 17 02:09:47 old-k8s-version-556100 kubelet[1879]: E0717 02:09:47.491060    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.294438   10936 logs.go:138] Found kubelet problem: Jul 17 02:09:51 old-k8s-version-556100 kubelet[1879]: E0717 02:09:51.087904    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0717 02:11:44.294438   10936 logs.go:138] Found kubelet problem: Jul 17 02:10:02 old-k8s-version-556100 kubelet[1879]: E0717 02:10:02.483792    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.296270   10936 logs.go:138] Found kubelet problem: Jul 17 02:10:02 old-k8s-version-556100 kubelet[1879]: E0717 02:10:02.540111    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.296420   10936 logs.go:138] Found kubelet problem: Jul 17 02:10:14 old-k8s-version-556100 kubelet[1879]: E0717 02:10:14.481266    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.296420   10936 logs.go:138] Found kubelet problem: Jul 17 02:10:15 old-k8s-version-556100 kubelet[1879]: E0717 02:10:15.482722    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.297092   10936 logs.go:138] Found kubelet problem: Jul 17 02:10:27 old-k8s-version-556100 kubelet[1879]: E0717 02:10:27.479567    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.297364   10936 logs.go:138] Found kubelet problem: Jul 17 02:10:30 old-k8s-version-556100 kubelet[1879]: E0717 02:10:30.485991    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.297577   10936 logs.go:138] Found kubelet problem: Jul 17 02:10:41 old-k8s-version-556100 kubelet[1879]: E0717 02:10:41.480505    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.297832   10936 logs.go:138] Found kubelet problem: Jul 17 02:10:43 old-k8s-version-556100 kubelet[1879]: E0717 02:10:43.478329    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.298105   10936 logs.go:138] Found kubelet problem: Jul 17 02:10:52 old-k8s-version-556100 kubelet[1879]: E0717 02:10:52.481369    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.298544   10936 logs.go:138] Found kubelet problem: Jul 17 02:10:54 old-k8s-version-556100 kubelet[1879]: E0717 02:10:54.482186    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.298788   10936 logs.go:138] Found kubelet problem: Jul 17 02:11:05 old-k8s-version-556100 kubelet[1879]: E0717 02:11:05.478059    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.299131   10936 logs.go:138] Found kubelet problem: Jul 17 02:11:09 old-k8s-version-556100 kubelet[1879]: E0717 02:11:09.534742    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.299131   10936 logs.go:138] Found kubelet problem: Jul 17 02:11:19 old-k8s-version-556100 kubelet[1879]: E0717 02:11:19.472912    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.299131   10936 logs.go:138] Found kubelet problem: Jul 17 02:11:24 old-k8s-version-556100 kubelet[1879]: E0717 02:11:24.471547    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.300329   10936 logs.go:138] Found kubelet problem: Jul 17 02:11:33 old-k8s-version-556100 kubelet[1879]: E0717 02:11:33.477616    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:44.300571   10936 logs.go:138] Found kubelet problem: Jul 17 02:11:37 old-k8s-version-556100 kubelet[1879]: E0717 02:11:37.474142    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	I0717 02:11:44.300571   10936 logs.go:123] Gathering logs for coredns [a36c9df36a2e] ...
	I0717 02:11:44.300571   10936 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a36c9df36a2e"
	I0717 02:11:44.363330   10936 logs.go:123] Gathering logs for kube-scheduler [b5b115665dd7] ...
	I0717 02:11:44.363429   10936 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5b115665dd7"
	I0717 02:11:44.431905   10936 logs.go:123] Gathering logs for container status ...
	I0717 02:11:44.432526   10936 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 02:11:44.637938   10936 logs.go:123] Gathering logs for storage-provisioner [d21e9adcbaa5] ...
	I0717 02:11:44.638003   10936 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d21e9adcbaa5"
	I0717 02:11:44.708242   10936 logs.go:123] Gathering logs for Docker ...
	I0717 02:11:44.708242   10936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 02:11:44.783595   10936 logs.go:123] Gathering logs for dmesg ...
	I0717 02:11:44.783595   10936 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 02:11:44.834461   10936 logs.go:123] Gathering logs for kube-apiserver [e030f8ebfbdb] ...
	I0717 02:11:44.834461   10936 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e030f8ebfbdb"
	I0717 02:11:45.050424   10936 logs.go:123] Gathering logs for kube-scheduler [66ed77ac46f7] ...
	I0717 02:11:45.050424   10936 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 66ed77ac46f7"
	I0717 02:11:45.163078   10936 logs.go:123] Gathering logs for kube-proxy [f60d408f22bf] ...
	I0717 02:11:45.163237   10936 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f60d408f22bf"
	I0717 02:11:45.258234   10936 logs.go:123] Gathering logs for kube-controller-manager [17ff785b07a1] ...
	I0717 02:11:45.258885   10936 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17ff785b07a1"
	I0717 02:11:45.355712   10936 logs.go:123] Gathering logs for kube-controller-manager [4771195745ef] ...
	I0717 02:11:45.355712   10936 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4771195745ef"
	I0717 02:11:45.471217   10936 out.go:304] Setting ErrFile to fd 956...
	I0717 02:11:45.471376   10936 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0717 02:11:45.471649   10936 out.go:239] X Problems detected in kubelet:
	W0717 02:11:45.471762   10936 out.go:239]   Jul 17 02:11:09 old-k8s-version-556100 kubelet[1879]: E0717 02:11:09.534742    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:45.471762   10936 out.go:239]   Jul 17 02:11:19 old-k8s-version-556100 kubelet[1879]: E0717 02:11:19.472912    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:45.471824   10936 out.go:239]   Jul 17 02:11:24 old-k8s-version-556100 kubelet[1879]: E0717 02:11:24.471547    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0717 02:11:45.471824   10936 out.go:239]   Jul 17 02:11:33 old-k8s-version-556100 kubelet[1879]: E0717 02:11:33.477616    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0717 02:11:45.471904   10936 out.go:239]   Jul 17 02:11:37 old-k8s-version-556100 kubelet[1879]: E0717 02:11:37.474142    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	I0717 02:11:45.471904   10936 out.go:304] Setting ErrFile to fd 956...
	I0717 02:11:45.471904   10936 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 02:11:45.057011   13868 ssh_runner.go:195] Run: which cri-dockerd
	I0717 02:11:45.093574   13868 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 02:11:45.127867   13868 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 02:11:45.214740   13868 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 02:11:45.488726   13868 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 02:11:45.650978   13868 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 02:11:45.650978   13868 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0717 02:11:45.718365   13868 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 02:11:45.943684   13868 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 02:11:55.507639   10936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 02:11:55.543889   10936 api_server.go:72] duration metric: took 6m10.2496857s to wait for apiserver process to appear ...
	I0717 02:11:55.543889   10936 api_server.go:88] waiting for apiserver healthz status ...
	I0717 02:11:55.685928   10936 out.go:177] 
	W0717 02:11:55.697074   10936 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W0717 02:11:55.697142   10936 out.go:239] * 
	W0717 02:11:55.698514   10936 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 02:11:55.719474   10936 out.go:177] 
	
	
	==> Docker <==
	Jul 17 02:07:09 old-k8s-version-556100 dockerd[1446]: time="2024-07-17T02:07:09.699403221Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 17 02:07:09 old-k8s-version-556100 dockerd[1446]: time="2024-07-17T02:07:09.699579442Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 17 02:07:11 old-k8s-version-556100 dockerd[1446]: time="2024-07-17T02:07:11.142351963Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 17 02:07:11 old-k8s-version-556100 dockerd[1446]: time="2024-07-17T02:07:11.533712379Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Jul 17 02:07:12 old-k8s-version-556100 dockerd[1446]: time="2024-07-17T02:07:12.223567301Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Jul 17 02:07:12 old-k8s-version-556100 dockerd[1446]: time="2024-07-17T02:07:12.223721918Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Jul 17 02:07:12 old-k8s-version-556100 dockerd[1446]: time="2024-07-17T02:07:12.223765023Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Jul 17 02:07:38 old-k8s-version-556100 dockerd[1446]: time="2024-07-17T02:07:38.771030568Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Jul 17 02:07:39 old-k8s-version-556100 dockerd[1446]: time="2024-07-17T02:07:39.003846033Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Jul 17 02:07:39 old-k8s-version-556100 dockerd[1446]: time="2024-07-17T02:07:39.004021653Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Jul 17 02:07:39 old-k8s-version-556100 dockerd[1446]: time="2024-07-17T02:07:39.004061757Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Jul 17 02:08:03 old-k8s-version-556100 dockerd[1446]: time="2024-07-17T02:08:03.533983229Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 17 02:08:03 old-k8s-version-556100 dockerd[1446]: time="2024-07-17T02:08:03.534264560Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 17 02:08:03 old-k8s-version-556100 dockerd[1446]: time="2024-07-17T02:08:03.542312058Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 17 02:08:29 old-k8s-version-556100 dockerd[1446]: time="2024-07-17T02:08:29.768335637Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Jul 17 02:08:30 old-k8s-version-556100 dockerd[1446]: time="2024-07-17T02:08:30.002778660Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Jul 17 02:08:30 old-k8s-version-556100 dockerd[1446]: time="2024-07-17T02:08:30.003182004Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Jul 17 02:08:30 old-k8s-version-556100 dockerd[1446]: time="2024-07-17T02:08:30.003839576Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Jul 17 02:09:35 old-k8s-version-556100 dockerd[1446]: time="2024-07-17T02:09:35.525843866Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 17 02:09:35 old-k8s-version-556100 dockerd[1446]: time="2024-07-17T02:09:35.525965981Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 17 02:09:35 old-k8s-version-556100 dockerd[1446]: time="2024-07-17T02:09:35.535303382Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Jul 17 02:09:50 old-k8s-version-556100 dockerd[1446]: time="2024-07-17T02:09:50.819039090Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Jul 17 02:09:51 old-k8s-version-556100 dockerd[1446]: time="2024-07-17T02:09:51.078104359Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Jul 17 02:09:51 old-k8s-version-556100 dockerd[1446]: time="2024-07-17T02:09:51.078344885Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Jul 17 02:09:51 old-k8s-version-556100 dockerd[1446]: time="2024-07-17T02:09:51.078390090Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d2895cf887fbe       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        4 minutes ago       Running             kubernetes-dashboard      0                   ae610eda24536       kubernetes-dashboard-cd95d586-4c4d4
	d21e9adcbaa58       6e38f40d628db                                                                                         5 minutes ago       Running             storage-provisioner       2                   abde6721e94a9       storage-provisioner
	17ff785b07a1a       b9fa1895dcaa6                                                                                         5 minutes ago       Running             kube-controller-manager   2                   3a63a4353de0d       kube-controller-manager-old-k8s-version-556100
	0630ce47e05d6       56cc512116c8f                                                                                         5 minutes ago       Running             busybox                   1                   b5a04179450dc       busybox
	a36c9df36a2e6       bfe3a36ebd252                                                                                         5 minutes ago       Running             coredns                   1                   b1c2a68494bc1       coredns-74ff55c5b-dmjwg
	ef8db1d0f6c0c       6e38f40d628db                                                                                         5 minutes ago       Exited              storage-provisioner       1                   abde6721e94a9       storage-provisioner
	f60d408f22bf0       10cc881966cfd                                                                                         5 minutes ago       Running             kube-proxy                1                   547ed0adeb770       kube-proxy-bjvpb
	0c11f86257e55       0369cf4303ffd                                                                                         6 minutes ago       Running             etcd                      1                   37373c53124ff       etcd-old-k8s-version-556100
	f1fe02b4a78aa       ca9843d3b5454                                                                                         6 minutes ago       Running             kube-apiserver            1                   98b7f0eb5445e       kube-apiserver-old-k8s-version-556100
	b5b115665dd71       3138b6e3d4712                                                                                         6 minutes ago       Running             kube-scheduler            1                   8830402d53cda       kube-scheduler-old-k8s-version-556100
	4771195745efb       b9fa1895dcaa6                                                                                         6 minutes ago       Exited              kube-controller-manager   1                   3a63a4353de0d       kube-controller-manager-old-k8s-version-556100
	5942096e922e6       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   7 minutes ago       Exited              busybox                   0                   344c3809f458b       busybox
	58914faf6333c       bfe3a36ebd252                                                                                         9 minutes ago       Exited              coredns                   0                   75a83b30dc0a9       coredns-74ff55c5b-dmjwg
	0d09bed0c3c57       10cc881966cfd                                                                                         9 minutes ago       Exited              kube-proxy                0                   3e1997c4378f8       kube-proxy-bjvpb
	ba8f75033d8cd       0369cf4303ffd                                                                                         10 minutes ago      Exited              etcd                      0                   0217ac024f51f       etcd-old-k8s-version-556100
	66ed77ac46f75       3138b6e3d4712                                                                                         10 minutes ago      Exited              kube-scheduler            0                   3ad7cdf2525ab       kube-scheduler-old-k8s-version-556100
	e030f8ebfbdb1       ca9843d3b5454                                                                                         10 minutes ago      Exited              kube-apiserver            0                   c60a812a46996       kube-apiserver-old-k8s-version-556100
	
	
	==> coredns [58914faf6333] <==
	I0717 02:02:57.949575       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-07-17 02:02:36.898958721 +0000 UTC m=+0.109496755) (total time: 21.052775488s):
	Trace[2019727887]: [21.052775488s] [21.052775488s] END
	I0717 02:02:57.949753       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-07-17 02:02:36.898833307 +0000 UTC m=+0.109371341) (total time: 21.05313823s):
	Trace[939984059]: [21.05313823s] [21.05313823s] END
	E0717 02:02:57.949801       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0717 02:02:57.949794       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	I0717 02:02:57.949627       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-07-17 02:02:36.899082036 +0000 UTC m=+0.109619970) (total time: 21.052815793s):
	Trace[1427131847]: [21.052815793s] [21.052815793s] END
	E0717 02:02:57.951281       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 512bc0e06a520fa44f35dc15de10fdd6
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a36c9df36a2e] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 512bc0e06a520fa44f35dc15de10fdd6
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:32820 - 37958 "HINFO IN 6915635393223143941.138931748865389916. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.032250216s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-556100
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-556100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3cfbbb17fd76400a5ee2ea427db7148a0ef7c185
	                    minikube.k8s.io/name=old-k8s-version-556100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_17T02_02_10_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Jul 2024 02:01:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-556100
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Jul 2024 02:11:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Jul 2024 02:07:17 +0000   Wed, 17 Jul 2024 02:01:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Jul 2024 02:07:17 +0000   Wed, 17 Jul 2024 02:01:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Jul 2024 02:07:17 +0000   Wed, 17 Jul 2024 02:01:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Jul 2024 02:07:17 +0000   Wed, 17 Jul 2024 02:02:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-556100
	Capacity:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868764Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868764Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb88624f2d924eea93c14a2028840dcc
	  System UUID:                cb88624f2d924eea93c14a2028840dcc
	  Boot ID:                    c8c682c7-038f-4949-bfeb-6c51c261a4de
	  Kernel Version:             5.15.146.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m50s
	  kube-system                 coredns-74ff55c5b-dmjwg                           100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     9m36s
	  kube-system                 etcd-old-k8s-version-556100                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         9m51s
	  kube-system                 kube-apiserver-old-k8s-version-556100             250m (1%)     0 (0%)      0 (0%)           0 (0%)         9m51s
	  kube-system                 kube-controller-manager-old-k8s-version-556100    200m (1%)     0 (0%)      0 (0%)           0 (0%)         9m51s
	  kube-system                 kube-proxy-bjvpb                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m36s
	  kube-system                 kube-scheduler-old-k8s-version-556100             100m (0%)     0 (0%)      0 (0%)           0 (0%)         9m51s
	  kube-system                 metrics-server-9975d5f86-bj56w                    100m (0%)     0 (0%)      200Mi (0%)       0 (0%)         7m36s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m30s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-qpkxf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-4c4d4               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (5%)   0 (0%)
	  memory             370Mi (1%)  170Mi (0%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet     Node old-k8s-version-556100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x7 over 10m)      kubelet     Node old-k8s-version-556100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)      kubelet     Node old-k8s-version-556100 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m51s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m51s                  kubelet     Node old-k8s-version-556100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m51s                  kubelet     Node old-k8s-version-556100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m51s                  kubelet     Node old-k8s-version-556100 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m51s                  kubelet     Node old-k8s-version-556100 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m51s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m41s                  kubelet     Node old-k8s-version-556100 status is now: NodeReady
	  Normal  Starting                 9m23s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m19s                  kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m19s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m18s (x8 over 6m19s)  kubelet     Node old-k8s-version-556100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m18s (x8 over 6m19s)  kubelet     Node old-k8s-version-556100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m18s (x7 over 6m19s)  kubelet     Node old-k8s-version-556100 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m45s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jul17 01:54] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jul17 01:56] tmpfs: Unknown parameter 'noswap'
	[  +6.146384] tmpfs: Unknown parameter 'noswap'
	[Jul17 01:58] hrtimer: interrupt took 366840 ns
	[Jul17 02:00] tmpfs: Unknown parameter 'noswap'
	[ +17.928153] tmpfs: Unknown parameter 'noswap'
	[Jul17 02:03] tmpfs: Unknown parameter 'noswap'
	[ +11.241360] tmpfs: Unknown parameter 'noswap'
	[Jul17 02:05] tmpfs: Unknown parameter 'noswap'
	[Jul17 02:10] tmpfs: Unknown parameter 'noswap'
	[Jul17 02:11] tmpfs: Unknown parameter 'noswap'
	
	
	==> etcd [0c11f86257e5] <==
	2024-07-17 02:11:18.603863 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/metrics-server-9975d5f86-bj56w\" " with result "range_response_count:1 size:4052" took too long (121.541131ms) to execute
	2024-07-17 02:11:19.845062 W | etcdserver: read-only range request "key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true " with result "range_response_count:0 size:5" took too long (238.384347ms) to execute
	2024-07-17 02:11:19.845130 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-556100\" " with result "range_response_count:1 size:5495" took too long (344.590696ms) to execute
	2024-07-17 02:11:20.902574 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-556100\" " with result "range_response_count:1 size:5495" took too long (418.525582ms) to execute
	2024-07-17 02:11:21.027934 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-17 02:11:26.111352 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/metrics-server-9975d5f86-bj56w\" " with result "range_response_count:1 size:4052" took too long (127.79811ms) to execute
	2024-07-17 02:11:26.870849 W | etcdserver: request "header:<ID:15638345903693574863 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.76.2\" mod_revision:1032 > success:<request_put:<key:\"/registry/masterleases/192.168.76.2\" value_size:67 lease:6414973866838799053 >> failure:<request_range:<key:\"/registry/masterleases/192.168.76.2\" > >>" with result "size:16" took too long (228.690371ms) to execute
	2024-07-17 02:11:26.871329 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/metrics-server-9975d5f86-bj56w\" " with result "range_response_count:1 size:4052" took too long (384.12311ms) to execute
	2024-07-17 02:11:26.871392 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1121" took too long (318.739443ms) to execute
	2024-07-17 02:11:29.268674 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/metrics-server-9975d5f86-bj56w\" " with result "range_response_count:1 size:4052" took too long (293.223666ms) to execute
	2024-07-17 02:11:31.030067 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-17 02:11:35.510210 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (228.293144ms) to execute
	2024-07-17 02:11:39.514265 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (237.206073ms) to execute
	2024-07-17 02:11:41.028951 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-17 02:11:43.594840 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (316.075109ms) to execute
	2024-07-17 02:11:43.594954 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:5495" took too long (394.25717ms) to execute
	2024-07-17 02:11:45.449869 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (168.228867ms) to execute
	2024-07-17 02:11:46.652357 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:420" took too long (145.6655ms) to execute
	2024-07-17 02:11:49.435933 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (162.515269ms) to execute
	2024-07-17 02:11:51.023314 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-17 02:11:53.456678 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (183.554141ms) to execute
	2024-07-17 02:11:56.615739 W | etcdserver: request "header:<ID:15638345903693575055 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.76.2\" mod_revision:1053 > success:<request_put:<key:\"/registry/masterleases/192.168.76.2\" value_size:67 lease:6414973866838799245 >> failure:<request_range:<key:\"/registry/masterleases/192.168.76.2\" > >>" with result "size:16" took too long (115.891001ms) to execute
	2024-07-17 02:11:57.612265 W | etcdserver: read-only range request "key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true " with result "range_response_count:0 size:5" took too long (115.69868ms) to execute
	2024-07-17 02:11:57.612323 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" count_only:true " with result "range_response_count:0 size:7" took too long (166.936963ms) to execute
	2024-07-17 02:12:01.024520 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [ba8f75033d8c] <==
	2024-07-17 02:03:22.941019 W | etcdserver: read-only range request "key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true " with result "range_response_count:0 size:7" took too long (679.837833ms) to execute
	2024-07-17 02:03:23.102747 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (139.651391ms) to execute
	2024-07-17 02:03:23.103058 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-74ff55c5b-qqdhk\" " with result "range_response_count:0 size:5" took too long (101.414177ms) to execute
	2024-07-17 02:03:23.103357 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-74ff55c5b-qqdhk\" " with result "range_response_count:0 size:5" took too long (100.447129ms) to execute
	2024-07-17 02:03:31.847490 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-17 02:03:36.499660 W | etcdserver: read-only range request "key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" count_only:true " with result "range_response_count:0 size:7" took too long (232.590403ms) to execute
	2024-07-17 02:03:36.500112 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:257" took too long (120.22812ms) to execute
	2024-07-17 02:03:36.746587 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:420" took too long (168.559877ms) to execute
	2024-07-17 02:03:36.746655 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (211.332671ms) to execute
	2024-07-17 02:03:41.847652 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-17 02:03:51.844317 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-17 02:04:01.844483 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-17 02:04:11.844586 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-17 02:04:21.841096 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-17 02:04:26.838658 W | etcdserver: request "header:<ID:15638345903630343327 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/metrics-server-9975d5f86-bj56w.17e2ddc9d95167e4\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/metrics-server-9975d5f86-bj56w.17e2ddc9d95167e4\" value_size:712 lease:6414973866775567463 >> failure:<>>" with result "size:16" took too long (168.838148ms) to execute
	2024-07-17 02:04:26.911170 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/metrics-server-9975d5f86-bj56w\" " with result "range_response_count:1 size:2983" took too long (232.664462ms) to execute
	2024-07-17 02:04:27.975707 W | etcdserver: read-only range request "key:\"/registry/endpointslices/kube-system/metrics-server-fxvd7\" " with result "range_response_count:1 size:1260" took too long (1.067129186s) to execute
	2024-07-17 02:04:27.976070 W | etcdserver: request "header:<ID:15638345903630343331 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-9975d5f86-bj56w\" mod_revision:592 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-9975d5f86-bj56w\" value_size:3801 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-9975d5f86-bj56w\" > >>" with result "size:16" took too long (415.317893ms) to execute
	2024-07-17 02:04:28.051191 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (515.106828ms) to execute
	2024-07-17 02:04:28.051270 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1120" took too long (233.993914ms) to execute
	2024-07-17 02:04:28.384692 N | pkg/osutil: received terminated signal, shutting down...
	WARNING: 2024/07/17 02:04:28 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2024/07/17 02:04:28 grpc: addrConn.createTransport failed to connect to {192.168.76.2:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.76.2:2379: connect: connection refused". Reconnecting...
	2024-07-17 02:04:28.479312 I | etcdserver: skipped leadership transfer for single voting member cluster
	2024-07-17 02:04:28.568959 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (198.203913ms) to execute
	
	
	==> kernel <==
	 02:12:02 up  3:58,  0 users,  load average: 9.80, 9.43, 8.55
	Linux old-k8s-version-556100 5.15.146.1-microsoft-standard-WSL2 #1 SMP Thu Jan 11 04:09:03 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [e030f8ebfbdb] <==
	W0717 02:04:37.966563       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0717 02:04:37.989930       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0717 02:04:37.994180       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0717 02:04:37.994377       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0717 02:04:38.007043       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0717 02:04:38.008451       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0717 02:04:38.015005       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0717 02:04:38.052744       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0717 02:04:38.054541       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0717 02:04:38.073939       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0717 02:04:38.095286       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0717 02:04:38.139948       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0717 02:04:38.171527       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0717 02:04:38.173626       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0717 02:04:38.198712       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0717 02:04:38.227683       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0717 02:04:38.253685       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0717 02:04:38.272380       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0717 02:04:38.289212       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0717 02:04:38.324299       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0717 02:04:38.324533       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0717 02:04:38.360019       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0717 02:04:38.388213       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0717 02:04:38.400699       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0717 02:04:38.440115       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	
	==> kube-apiserver [f1fe02b4a78a] <==
	Trace[2069632168]: [2.653949233s] [2.653949233s] END
	I0717 02:10:31.862165       1 trace.go:205] Trace[1860122485]: "Get" url:/api/v1/namespaces/default/services/kubernetes,user-agent:kube-apiserver/v1.20.0 (linux/amd64) kubernetes/af46c47,client:::1 (17-Jul-2024 02:10:29.201) (total time: 2660ms):
	Trace[1860122485]: ---"About to write a response" 2658ms (02:10:00.860)
	Trace[1860122485]: [2.660336721s] [2.660336721s] END
	I0717 02:10:32.163494       1 trace.go:205] Trace[1866393545]: "GuaranteedUpdate etcd3" type:*core.Event (17-Jul-2024 02:10:31.306) (total time: 857ms):
	Trace[1866393545]: ---"initial value restored" 647ms (02:10:00.953)
	Trace[1866393545]: ---"Transaction committed" 208ms (02:10:00.163)
	Trace[1866393545]: [857.218758ms] [857.218758ms] END
	I0717 02:10:32.163753       1 trace.go:205] Trace[1354425219]: "Patch" url:/api/v1/namespaces/kube-system/events/kube-apiserver-old-k8s-version-556100.17e2de1e0fb9e5fa,user-agent:kubelet/v1.20.0 (linux/amd64) kubernetes/af46c47,client:192.168.76.2 (17-Jul-2024 02:10:31.305) (total time: 857ms):
	Trace[1354425219]: ---"About to apply patch" 647ms (02:10:00.953)
	Trace[1354425219]: ---"Object stored in database" 208ms (02:10:00.163)
	Trace[1354425219]: [857.79872ms] [857.79872ms] END
	I0717 02:10:43.078420       1 trace.go:205] Trace[750107885]: "Get" url:/api/v1/namespaces/kube-system/pods/metrics-server-9975d5f86-bj56w,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,client:192.168.76.1 (17-Jul-2024 02:10:42.481) (total time: 597ms):
	Trace[750107885]: ---"About to write a response" 596ms (02:10:00.077)
	Trace[750107885]: [597.132038ms] [597.132038ms] END
	I0717 02:11:02.785804       1 client.go:360] parsed scheme: "passthrough"
	I0717 02:11:02.786054       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0717 02:11:02.786075       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0717 02:11:04.266046       1 handler_proxy.go:102] no RequestInfo found in the context
	E0717 02:11:04.266421       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 02:11:04.266441       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 02:11:41.257668       1 client.go:360] parsed scheme: "passthrough"
	I0717 02:11:41.257826       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0717 02:11:41.257862       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [17ff785b07a1] <==
	E0717 02:07:49.631982       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0717 02:07:54.038538       1 request.go:655] Throttling request took 1.048133749s, request: GET:https://192.168.76.2:8443/apis/events.k8s.io/v1?timeout=32s
	W0717 02:07:54.890571       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 02:08:20.132115       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0717 02:08:26.538121       1 request.go:655] Throttling request took 1.047378569s, request: GET:https://192.168.76.2:8443/apis/storage.k8s.io/v1beta1?timeout=32s
	W0717 02:08:27.390753       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 02:08:50.632988       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0717 02:08:59.039296       1 request.go:655] Throttling request took 1.047036682s, request: GET:https://192.168.76.2:8443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s
	W0717 02:08:59.891331       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 02:09:21.132660       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0717 02:09:31.539002       1 request.go:655] Throttling request took 1.048078573s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
	W0717 02:09:32.391763       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 02:09:51.633470       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0717 02:10:04.040491       1 request.go:655] Throttling request took 1.04384124s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
	W0717 02:10:04.892630       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 02:10:22.133831       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0717 02:10:36.540999       1 request.go:655] Throttling request took 1.047193767s, request: GET:https://192.168.76.2:8443/apis/coordination.k8s.io/v1beta1?timeout=32s
	W0717 02:10:37.394208       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 02:10:52.635545       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0717 02:11:09.042527       1 request.go:655] Throttling request took 1.044794004s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0717 02:11:09.897071       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 02:11:23.138273       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0717 02:11:41.546758       1 request.go:655] Throttling request took 1.046822624s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0717 02:11:42.400094       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0717 02:11:53.637162       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-controller-manager [4771195745ef] <==
		/usr/local/go/src/math/big/nat.go:1261 +0x9ef
	math/big.(*Int).Exp(0xc000c8b7c0, 0xc000cb54b0, 0xc000b7fc40, 0xc000b7fb80, 0xc000cb54d0)
		/usr/local/go/src/math/big/int.go:509 +0x1a5
	crypto/rsa.decrypt(0x4d9da20, 0xc0000ba540, 0xc000115260, 0xc000cb54b0, 0xc000e04000, 0x20, 0x24)
		/usr/local/go/src/crypto/rsa/rsa.go:535 +0x12c
	crypto/rsa.decryptAndCheck(0x4d9da20, 0xc0000ba540, 0xc000115260, 0xc000cb5638, 0x100, 0x100, 0xc000e04000)
		/usr/local/go/src/crypto/rsa/rsa.go:570 +0x53
	crypto/rsa.signPSSWithSalt(0x4d9da20, 0xc0000ba540, 0xc000115260, 0x5, 0xc000cfa0e0, 0x20, 0x20, 0xc000cfa100, 0x20, 0x20, ...)
		/usr/local/go/src/crypto/rsa/pss.go:217 +0x1c5
	crypto/rsa.SignPSS(0x4d9da20, 0xc0000ba540, 0xc000115260, 0x5, 0xc000cfa0e0, 0x20, 0x20, 0xc000c8ca60, 0x40bf45, 0x3f6fec0, ...)
		/usr/local/go/src/crypto/rsa/pss.go:281 +0x1d7
	crypto/rsa.(*PrivateKey).Sign(0xc000115260, 0x4d9da20, 0xc0000ba540, 0xc000cfa0e0, 0x20, 0x20, 0x4d9da60, 0xc000c8ca60, 0x25, 0x80, ...)
		/usr/local/go/src/crypto/rsa/rsa.go:146 +0x9e
	crypto/tls.(*serverHandshakeStateTLS13).sendServerCertificate(0xc000cb5aa0, 0x0, 0x0)
		/usr/local/go/src/crypto/tls/handshake_server_tls13.go:623 +0x466
	crypto/tls.(*serverHandshakeStateTLS13).handshake(0xc000cb5aa0, 0xc000cf4000, 0x0)
		/usr/local/go/src/crypto/tls/handshake_server_tls13.go:59 +0xc7
	crypto/tls.(*Conn).serverHandshake(0xc0003e4380, 0xc000c8c7e0, 0xf)
		/usr/local/go/src/crypto/tls/handshake_server.go:50 +0xbc
	crypto/tls.(*Conn).Handshake(0xc0003e4380, 0x0, 0x0)
		/usr/local/go/src/crypto/tls/conn.go:1362 +0xc9
	net/http.(*conn).serve(0xc000c8ed20, 0x4e0fb20, 0xc0005da420)
		/usr/local/go/src/net/http/server.go:1817 +0x1a5
	created by net/http.(*Server).Serve
		/usr/local/go/src/net/http/server.go:2969 +0x36c
	
	
	==> kube-proxy [0d09bed0c3c5] <==
	W0717 02:02:37.183593       1 proxier.go:651] Failed to read file /lib/modules/5.15.146.1-microsoft-standard-WSL2/modules.builtin with error open /lib/modules/5.15.146.1-microsoft-standard-WSL2/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0717 02:02:37.188626       1 proxier.go:661] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0717 02:02:37.192795       1 proxier.go:661] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0717 02:02:37.196325       1 proxier.go:661] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0717 02:02:37.201388       1 proxier.go:661] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0717 02:02:37.206391       1 proxier.go:661] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	I0717 02:02:37.244474       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0717 02:02:37.244587       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0717 02:02:39.050338       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0717 02:02:39.050727       1 server_others.go:185] Using iptables Proxier.
	I0717 02:02:39.051988       1 server.go:650] Version: v1.20.0
	I0717 02:02:39.052936       1 config.go:224] Starting endpoint slice config controller
	I0717 02:02:39.052975       1 config.go:315] Starting service config controller
	I0717 02:02:39.052974       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0717 02:02:39.053079       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0717 02:02:39.153570       1 shared_informer.go:247] Caches are synced for service config 
	I0717 02:02:39.153662       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [f60d408f22bf] <==
	W0717 02:06:17.279124       1 proxier.go:651] Failed to read file /lib/modules/5.15.146.1-microsoft-standard-WSL2/modules.builtin with error open /lib/modules/5.15.146.1-microsoft-standard-WSL2/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0717 02:06:17.284682       1 proxier.go:661] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0717 02:06:17.289949       1 proxier.go:661] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0717 02:06:17.293900       1 proxier.go:661] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0717 02:06:17.297550       1 proxier.go:661] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	W0717 02:06:17.305071       1 proxier.go:661] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
	I0717 02:06:17.384868       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0717 02:06:17.385009       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0717 02:06:17.659260       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0717 02:06:17.659538       1 server_others.go:185] Using iptables Proxier.
	I0717 02:06:17.660197       1 server.go:650] Version: v1.20.0
	I0717 02:06:17.661080       1 config.go:315] Starting service config controller
	I0717 02:06:17.661117       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0717 02:06:17.661342       1 config.go:224] Starting endpoint slice config controller
	I0717 02:06:17.661426       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0717 02:06:17.761597       1 shared_informer.go:247] Caches are synced for service config 
	I0717 02:06:17.769761       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [66ed77ac46f7] <==
	E0717 02:01:58.056964       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 02:01:58.171284       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 02:01:58.306704       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 02:01:58.324425       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 02:01:58.379869       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 02:01:58.402685       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 02:01:58.458555       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 02:01:59.724467       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 02:01:59.802906       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 02:01:59.848686       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 02:02:00.262992       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 02:02:00.263072       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 02:02:00.441077       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 02:02:00.481664       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 02:02:00.510549       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 02:02:00.623213       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 02:02:00.711932       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 02:02:01.036705       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 02:02:01.137257       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 02:02:03.461442       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 02:02:03.888001       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 02:02:04.561682       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 02:02:04.752873       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 02:02:05.390945       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0717 02:02:13.719792       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [b5b115665dd7] <==
	I0717 02:05:53.392725       1 serving.go:331] Generated self-signed cert in-memory
	W0717 02:06:03.267153       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 02:06:03.267234       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 02:06:03.267250       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 02:06:03.267260       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 02:06:03.572104       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0717 02:06:03.572184       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0717 02:06:03.572370       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 02:06:03.572388       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 02:06:03.759823       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Jul 17 02:09:51 old-k8s-version-556100 kubelet[1879]: E0717 02:09:51.087520    1879 remote_image.go:113] PullImage "registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/
	Jul 17 02:09:51 old-k8s-version-556100 kubelet[1879]: E0717 02:09:51.087661    1879 kuberuntime_image.go:51] Pull image "registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/
	Jul 17 02:09:51 old-k8s-version-556100 kubelet[1879]: E0717 02:09:51.087862    1879 kuberuntime_manager.go:829] container &Container{Name:dashboard-metrics-scraper,Image:registry.k8s.io/echoserver:1.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:8000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kubernetes-dashboard-token-jc8n9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 8000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:*2001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd): ErrImagePull: rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/
	Jul 17 02:09:51 old-k8s-version-556100 kubelet[1879]: E0717 02:09:51.087904    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Jul 17 02:10:02 old-k8s-version-556100 kubelet[1879]: E0717 02:10:02.483792    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 02:10:02 old-k8s-version-556100 kubelet[1879]: E0717 02:10:02.540111    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Jul 17 02:10:14 old-k8s-version-556100 kubelet[1879]: E0717 02:10:14.481266    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 02:10:15 old-k8s-version-556100 kubelet[1879]: E0717 02:10:15.482722    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Jul 17 02:10:27 old-k8s-version-556100 kubelet[1879]: E0717 02:10:27.479567    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 02:10:30 old-k8s-version-556100 kubelet[1879]: E0717 02:10:30.485991    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Jul 17 02:10:41 old-k8s-version-556100 kubelet[1879]: E0717 02:10:41.480505    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 02:10:43 old-k8s-version-556100 kubelet[1879]: E0717 02:10:43.478329    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Jul 17 02:10:43 old-k8s-version-556100 kubelet[1879]: W0717 02:10:43.574422    1879 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Jul 17 02:10:43 old-k8s-version-556100 kubelet[1879]: W0717 02:10:43.575541    1879 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
	Jul 17 02:10:52 old-k8s-version-556100 kubelet[1879]: E0717 02:10:52.481369    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 02:10:54 old-k8s-version-556100 kubelet[1879]: E0717 02:10:54.482186    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Jul 17 02:11:05 old-k8s-version-556100 kubelet[1879]: E0717 02:11:05.478059    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 02:11:09 old-k8s-version-556100 kubelet[1879]: E0717 02:11:09.534742    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Jul 17 02:11:19 old-k8s-version-556100 kubelet[1879]: E0717 02:11:19.472912    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 02:11:24 old-k8s-version-556100 kubelet[1879]: E0717 02:11:24.471547    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Jul 17 02:11:33 old-k8s-version-556100 kubelet[1879]: E0717 02:11:33.477616    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 02:11:37 old-k8s-version-556100 kubelet[1879]: E0717 02:11:37.474142    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Jul 17 02:11:46 old-k8s-version-556100 kubelet[1879]: E0717 02:11:46.473418    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 17 02:11:49 old-k8s-version-556100 kubelet[1879]: E0717 02:11:49.468357    1879 pod_workers.go:191] Error syncing pod e166181c-9918-40cf-8e8d-cd158010b0dd ("dashboard-metrics-scraper-8d5bb5db8-qpkxf_kubernetes-dashboard(e166181c-9918-40cf-8e8d-cd158010b0dd)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Jul 17 02:11:57 old-k8s-version-556100 kubelet[1879]: E0717 02:11:57.469122    1879 pod_workers.go:191] Error syncing pod 976aeaca-4062-499e-a622-502a6906e052 ("metrics-server-9975d5f86-bj56w_kube-system(976aeaca-4062-499e-a622-502a6906e052)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> kubernetes-dashboard [d2895cf887fb] <==
	2024/07/17 02:07:12 Starting overwatch
	2024/07/17 02:07:12 Using namespace: kubernetes-dashboard
	2024/07/17 02:07:12 Using in-cluster config to connect to apiserver
	2024/07/17 02:07:12 Using secret token for csrf signing
	2024/07/17 02:07:12 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/07/17 02:07:12 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/07/17 02:07:12 Successful initial request to the apiserver, version: v1.20.0
	2024/07/17 02:07:12 Generating JWE encryption key
	2024/07/17 02:07:12 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/07/17 02:07:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/07/17 02:07:12 Initializing JWE encryption key from synchronized object
	2024/07/17 02:07:12 Creating in-cluster Sidecar client
	2024/07/17 02:07:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/17 02:07:12 Serving insecurely on HTTP port: 9090
	2024/07/17 02:07:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/17 02:08:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/17 02:08:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/17 02:09:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/17 02:09:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/17 02:10:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/17 02:10:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/17 02:11:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/17 02:11:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [d21e9adcbaa5] <==
	I0717 02:06:52.184677       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 02:06:52.206454       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 02:06:52.206854       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 02:07:09.695887       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 02:07:09.696188       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"667a6f75-5943-46ad-90f5-85b506795917", APIVersion:"v1", ResourceVersion:"802", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-556100_14844f7f-d98e-488c-a859-9f80ef71c1bd became leader
	I0717 02:07:09.696335       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-556100_14844f7f-d98e-488c-a859-9f80ef71c1bd!
	I0717 02:07:09.798141       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-556100_14844f7f-d98e-488c-a859-9f80ef71c1bd!
	
	
	==> storage-provisioner [ef8db1d0f6c0] <==
	I0717 02:06:17.676248       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0717 02:06:38.732070       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0717 02:11:58.486816    4476 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-556100 -n old-k8s-version-556100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-556100 -n old-k8s-version-556100: (2.002926s)
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-556100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-bj56w dashboard-metrics-scraper-8d5bb5db8-qpkxf
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-556100 describe pod metrics-server-9975d5f86-bj56w dashboard-metrics-scraper-8d5bb5db8-qpkxf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-556100 describe pod metrics-server-9975d5f86-bj56w dashboard-metrics-scraper-8d5bb5db8-qpkxf: exit status 1 (894.1068ms)

                                                
                                                
** stderr ** 
	E0717 02:12:08.028444    8800 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0717 02:12:08.187555    8800 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0717 02:12:08.210153    8800 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0717 02:12:08.226726    8800 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	Error from server (NotFound): pods "metrics-server-9975d5f86-bj56w" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-8d5bb5db8-qpkxf" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-556100 describe pod metrics-server-9975d5f86-bj56w dashboard-metrics-scraper-8d5bb5db8-qpkxf: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (446.52s)


Test pass (317/348)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 13.45
4 TestDownloadOnly/v1.20.0/preload-exists 0.08
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.29
9 TestDownloadOnly/v1.20.0/DeleteAll 2.52
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.23
12 TestDownloadOnly/v1.30.2/json-events 8.27
13 TestDownloadOnly/v1.30.2/preload-exists 0
16 TestDownloadOnly/v1.30.2/kubectl 0
17 TestDownloadOnly/v1.30.2/LogsDuration 0.26
18 TestDownloadOnly/v1.30.2/DeleteAll 2.31
19 TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds 1.31
21 TestDownloadOnly/v1.31.0-beta.0/json-events 7.58
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.5
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 1.89
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 1.19
29 TestDownloadOnlyKic 3.91
30 TestBinaryMirror 3.6
31 TestOffline 250.3
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.32
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.32
36 TestAddons/Setup 595.24
40 TestAddons/parallel/InspektorGadget 15.19
41 TestAddons/parallel/MetricsServer 8.04
42 TestAddons/parallel/HelmTiller 30.59
44 TestAddons/parallel/CSI 75.53
45 TestAddons/parallel/Headlamp 27.25
46 TestAddons/parallel/CloudSpanner 8.02
47 TestAddons/parallel/LocalPath 90.49
48 TestAddons/parallel/NvidiaDevicePlugin 9.61
49 TestAddons/parallel/Yakd 5.03
50 TestAddons/parallel/Volcano 69.02
53 TestAddons/serial/GCPAuth/Namespaces 0.36
54 TestAddons/StoppedEnableDisable 14.81
55 TestCertOptions 88.38
56 TestCertExpiration 378.43
57 TestDockerFlags 90.43
58 TestForceSystemdFlag 174.42
59 TestForceSystemdEnv 138.77
66 TestErrorSpam/start 4.19
67 TestErrorSpam/status 4.21
68 TestErrorSpam/pause 4.07
69 TestErrorSpam/unpause 5.54
70 TestErrorSpam/stop 19.9
73 TestFunctional/serial/CopySyncFile 0.03
74 TestFunctional/serial/StartWithProxy 94.33
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 52.83
77 TestFunctional/serial/KubeContext 0.13
78 TestFunctional/serial/KubectlGetPods 0.25
81 TestFunctional/serial/CacheCmd/cache/add_remote 7.28
82 TestFunctional/serial/CacheCmd/cache/add_local 4.71
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.25
84 TestFunctional/serial/CacheCmd/cache/list 0.23
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 1.25
86 TestFunctional/serial/CacheCmd/cache/cache_reload 5.7
87 TestFunctional/serial/CacheCmd/cache/delete 0.53
88 TestFunctional/serial/MinikubeKubectlCmd 0.52
90 TestFunctional/serial/ExtraConfig 51.62
91 TestFunctional/serial/ComponentHealth 0.19
92 TestFunctional/serial/LogsCmd 2.88
93 TestFunctional/serial/LogsFileCmd 2.95
94 TestFunctional/serial/InvalidService 6.07
98 TestFunctional/parallel/DryRun 3.38
99 TestFunctional/parallel/InternationalLanguage 1.32
100 TestFunctional/parallel/StatusCmd 7.31
105 TestFunctional/parallel/AddonsCmd 1.11
106 TestFunctional/parallel/PersistentVolumeClaim 96.91
108 TestFunctional/parallel/SSHCmd 2.71
109 TestFunctional/parallel/CpCmd 7.33
110 TestFunctional/parallel/MySQL 90.83
111 TestFunctional/parallel/FileSync 1.67
112 TestFunctional/parallel/CertSync 10.72
116 TestFunctional/parallel/NodeLabels 0.23
118 TestFunctional/parallel/NonActiveRuntimeDisabled 1.7
120 TestFunctional/parallel/License 5.07
121 TestFunctional/parallel/ServiceCmd/DeployApp 27.62
122 TestFunctional/parallel/Version/short 0.27
123 TestFunctional/parallel/Version/components 2.39
124 TestFunctional/parallel/ImageCommands/ImageListShort 0.9
125 TestFunctional/parallel/ImageCommands/ImageListTable 0.92
126 TestFunctional/parallel/ImageCommands/ImageListJson 0.94
127 TestFunctional/parallel/ImageCommands/ImageListYaml 0.96
128 TestFunctional/parallel/ImageCommands/ImageBuild 11.52
129 TestFunctional/parallel/ImageCommands/Setup 3.2
130 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 6.16
131 TestFunctional/parallel/ProfileCmd/profile_not_create 3.38
132 TestFunctional/parallel/ProfileCmd/profile_list 2.35
133 TestFunctional/parallel/ProfileCmd/profile_json_output 2.39
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.52
135 TestFunctional/parallel/DockerEnv/powershell 11.61
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.02
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.04
138 TestFunctional/parallel/UpdateContextCmd/no_changes 0.73
139 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.68
140 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.72
141 TestFunctional/parallel/ImageCommands/ImageRemove 1.96
142 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.51
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.46
145 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 2.61
146 TestFunctional/parallel/ServiceCmd/List 2.75
147 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
149 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 35.01
150 TestFunctional/parallel/ServiceCmd/JSONOutput 2.2
151 TestFunctional/parallel/ServiceCmd/HTTPS 15.03
152 TestFunctional/parallel/ServiceCmd/Format 15.05
153 TestFunctional/parallel/ServiceCmd/URL 15.02
154 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.31
159 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.24
160 TestFunctional/delete_echo-server_images 0.41
161 TestFunctional/delete_my-image_image 0.19
162 TestFunctional/delete_minikube_cached_images 0.17
166 TestMultiControlPlane/serial/StartCluster 279.7
167 TestMultiControlPlane/serial/DeployApp 28.71
168 TestMultiControlPlane/serial/PingHostFromPods 3.87
169 TestMultiControlPlane/serial/AddWorkerNode 75.31
170 TestMultiControlPlane/serial/NodeLabels 0.21
171 TestMultiControlPlane/serial/HAppyAfterClusterStart 3.6
172 TestMultiControlPlane/serial/CopyFile 75.27
173 TestMultiControlPlane/serial/StopSecondaryNode 15.86
174 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 2.83
175 TestMultiControlPlane/serial/RestartSecondaryNode 67.86
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 3.72
177 TestMultiControlPlane/serial/RestartClusterKeepsNodes 301.25
178 TestMultiControlPlane/serial/DeleteSecondaryNode 24.01
179 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 2.57
180 TestMultiControlPlane/serial/StopCluster 38.44
181 TestMultiControlPlane/serial/RestartCluster 131.95
182 TestMultiControlPlane/serial/DegradedAfterClusterRestart 2.63
183 TestMultiControlPlane/serial/AddSecondaryNode 89.13
184 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 3.61
187 TestImageBuild/serial/Setup 73.51
188 TestImageBuild/serial/NormalBuild 4.28
189 TestImageBuild/serial/BuildWithBuildArg 2.63
190 TestImageBuild/serial/BuildWithDockerIgnore 1.71
191 TestImageBuild/serial/BuildWithSpecifiedDockerfile 2.09
195 TestJSONOutput/start/Command 90.12
196 TestJSONOutput/start/Audit 0
198 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/pause/Command 1.75
202 TestJSONOutput/pause/Audit 0
204 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/unpause/Command 1.66
208 TestJSONOutput/unpause/Audit 0
210 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
213 TestJSONOutput/stop/Command 13.01
214 TestJSONOutput/stop/Audit 0
216 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
217 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
218 TestErrorJSONOutput 1.48
220 TestKicCustomNetwork/create_custom_network 83.1
221 TestKicCustomNetwork/use_default_bridge_network 87.04
222 TestKicExistingNetwork 87.35
223 TestKicCustomSubnet 80.88
224 TestKicStaticIP 82.4
225 TestMainNoArgs 0.26
226 TestMinikubeProfile 158.28
229 TestMountStart/serial/StartWithMountFirst 32.08
230 TestMountStart/serial/VerifyMountFirst 1.21
231 TestMountStart/serial/StartWithMountSecond 31.39
232 TestMountStart/serial/VerifyMountSecond 1.18
233 TestMountStart/serial/DeleteFirst 4.14
234 TestMountStart/serial/VerifyMountPostDelete 1.19
235 TestMountStart/serial/Stop 2.63
236 TestMountStart/serial/RestartStopped 23.6
237 TestMountStart/serial/VerifyMountPostStop 1.25
240 TestMultiNode/serial/FreshStart2Nodes 179.06
241 TestMultiNode/serial/DeployApp2Nodes 35.44
242 TestMultiNode/serial/PingHostFrom2Pods 2.54
243 TestMultiNode/serial/AddNode 68.85
244 TestMultiNode/serial/MultiNodeLabels 0.21
245 TestMultiNode/serial/ProfileList 1.49
246 TestMultiNode/serial/CopyFile 41.83
247 TestMultiNode/serial/StopNode 6.87
248 TestMultiNode/serial/StartAfterStop 27.36
249 TestMultiNode/serial/RestartKeepsNodes 164.95
250 TestMultiNode/serial/DeleteNode 13.84
251 TestMultiNode/serial/StopMultiNode 25.51
252 TestMultiNode/serial/RestartMultiNode 80.19
253 TestMultiNode/serial/ValidateNameConflict 76.87
257 TestPreload 197.57
258 TestScheduledStopWindows 141.58
262 TestInsufficientStorage 54.74
263 TestRunningBinaryUpgrade 423.31
265 TestKubernetesUpgrade 658.4
266 TestMissingContainerUpgrade 513.85
268 TestNoKubernetes/serial/StartNoK8sWithVersion 0.34
269 TestNoKubernetes/serial/StartWithK8s 145.89
270 TestNoKubernetes/serial/StartWithStopK8s 66.64
271 TestNoKubernetes/serial/Start 33.09
272 TestNoKubernetes/serial/VerifyK8sNotRunning 1.67
273 TestNoKubernetes/serial/ProfileList 22.45
282 TestPause/serial/Start 168.19
283 TestNoKubernetes/serial/Stop 4.34
284 TestNoKubernetes/serial/StartNoArgs 19.34
285 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 1.38
286 TestPause/serial/SecondStartNoReconfiguration 86.78
287 TestPause/serial/Pause 2.1
288 TestPause/serial/VerifyStatus 1.72
289 TestPause/serial/Unpause 2.18
290 TestPause/serial/PauseAgain 2.35
291 TestPause/serial/DeletePaused 6.79
303 TestPause/serial/VerifyDeletedResources 3.4
304 TestStoppedBinaryUpgrade/Setup 1.64
305 TestStoppedBinaryUpgrade/Upgrade 212.81
307 TestStartStop/group/old-k8s-version/serial/FirstStart 297.33
308 TestStoppedBinaryUpgrade/MinikubeLogs 5.63
310 TestStartStop/group/embed-certs/serial/FirstStart 144.33
312 TestStartStop/group/no-preload/serial/FirstStart 160.5
314 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 139.79
315 TestStartStop/group/embed-certs/serial/DeployApp 10.82
316 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.15
317 TestStartStop/group/embed-certs/serial/Stop 13.35
318 TestStartStop/group/no-preload/serial/DeployApp 9.83
319 TestStartStop/group/old-k8s-version/serial/DeployApp 11.25
320 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 1.24
321 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 3.05
322 TestStartStop/group/embed-certs/serial/SecondStart 303.97
323 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 3.4
324 TestStartStop/group/no-preload/serial/Stop 14.95
325 TestStartStop/group/old-k8s-version/serial/Stop 14.04
326 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.98
327 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 1.32
328 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 1.37
329 TestStartStop/group/no-preload/serial/SecondStart 320.31
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.5
332 TestStartStop/group/default-k8s-diff-port/serial/Stop 14.43
333 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 1.75
334 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 334.22
335 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.03
336 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.42
337 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.94
338 TestStartStop/group/embed-certs/serial/Pause 10.9
340 TestStartStop/group/newest-cni/serial/FirstStart 139.82
341 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.02
342 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.47
343 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 1.06
344 TestStartStop/group/no-preload/serial/Pause 11.31
345 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 7.03
346 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.63
347 TestNetworkPlugins/group/auto/Start 153.13
348 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 1.17
349 TestStartStop/group/default-k8s-diff-port/serial/Pause 12.79
350 TestNetworkPlugins/group/kindnet/Start 160.64
351 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.03
352 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.73
353 TestStartStop/group/newest-cni/serial/DeployApp 0
354 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 4.83
355 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 1.2
356 TestStartStop/group/old-k8s-version/serial/Pause 13.27
357 TestStartStop/group/newest-cni/serial/Stop 8.94
358 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 1.59
359 TestStartStop/group/newest-cni/serial/SecondStart 60.1
360 TestNetworkPlugins/group/calico/Start 203.86
361 TestNetworkPlugins/group/auto/KubeletFlags 1.56
362 TestNetworkPlugins/group/auto/NetCatPod 23.51
363 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
364 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
365 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 1.15
366 TestStartStop/group/newest-cni/serial/Pause 11.85
367 TestNetworkPlugins/group/auto/DNS 0.38
368 TestNetworkPlugins/group/auto/Localhost 0.51
369 TestNetworkPlugins/group/auto/HairPin 0.35
370 TestNetworkPlugins/group/custom-flannel/Start 181.27
371 TestNetworkPlugins/group/kindnet/ControllerPod 6.02
372 TestNetworkPlugins/group/kindnet/KubeletFlags 1.42
373 TestNetworkPlugins/group/kindnet/NetCatPod 41.73
374 TestNetworkPlugins/group/kindnet/DNS 0.38
375 TestNetworkPlugins/group/kindnet/Localhost 1.15
376 TestNetworkPlugins/group/kindnet/HairPin 0.37
377 TestNetworkPlugins/group/false/Start 115.67
378 TestNetworkPlugins/group/calico/ControllerPod 6.04
379 TestNetworkPlugins/group/calico/KubeletFlags 1.55
380 TestNetworkPlugins/group/enable-default-cni/Start 128.8
381 TestNetworkPlugins/group/calico/NetCatPod 40.8
382 TestNetworkPlugins/group/custom-flannel/KubeletFlags 1.33
383 TestNetworkPlugins/group/custom-flannel/NetCatPod 18.8
384 TestNetworkPlugins/group/false/KubeletFlags 1.48
385 TestNetworkPlugins/group/calico/DNS 0.44
386 TestNetworkPlugins/group/calico/Localhost 0.34
387 TestNetworkPlugins/group/false/NetCatPod 20.74
388 TestNetworkPlugins/group/calico/HairPin 0.42
389 TestNetworkPlugins/group/custom-flannel/DNS 0.39
390 TestNetworkPlugins/group/custom-flannel/Localhost 0.35
391 TestNetworkPlugins/group/custom-flannel/HairPin 0.34
392 TestNetworkPlugins/group/false/DNS 0.4
393 TestNetworkPlugins/group/false/Localhost 0.34
394 TestNetworkPlugins/group/false/HairPin 0.34
395 TestNetworkPlugins/group/flannel/Start 159.65
396 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 1.48
397 TestNetworkPlugins/group/enable-default-cni/NetCatPod 42.3
398 TestNetworkPlugins/group/bridge/Start 117.41
399 TestNetworkPlugins/group/kubenet/Start 144.37
400 TestNetworkPlugins/group/enable-default-cni/DNS 0.59
401 TestNetworkPlugins/group/enable-default-cni/Localhost 0.33
402 TestNetworkPlugins/group/enable-default-cni/HairPin 0.32
403 TestNetworkPlugins/group/bridge/KubeletFlags 1.28
404 TestNetworkPlugins/group/bridge/NetCatPod 17.62
405 TestNetworkPlugins/group/flannel/ControllerPod 6.03
406 TestNetworkPlugins/group/flannel/KubeletFlags 1.45
407 TestNetworkPlugins/group/flannel/NetCatPod 18.63
408 TestNetworkPlugins/group/bridge/DNS 0.36
409 TestNetworkPlugins/group/bridge/Localhost 0.35
410 TestNetworkPlugins/group/bridge/HairPin 0.33
411 TestNetworkPlugins/group/kubenet/KubeletFlags 1.34
412 TestNetworkPlugins/group/flannel/DNS 0.39
413 TestNetworkPlugins/group/flannel/Localhost 0.34
414 TestNetworkPlugins/group/flannel/HairPin 0.36
415 TestNetworkPlugins/group/kubenet/NetCatPod 19.67
416 TestNetworkPlugins/group/kubenet/DNS 0.4
417 TestNetworkPlugins/group/kubenet/Localhost 0.35
418 TestNetworkPlugins/group/kubenet/HairPin 0.39
TestDownloadOnly/v1.20.0/json-events (13.45s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-528400 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-528400 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker: (13.4534149s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (13.45s)

TestDownloadOnly/v1.20.0/preload-exists (0.08s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.08s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-528400
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-528400: exit status 85 (277.3557ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-528400 | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:20 UTC |          |
	|         | -p download-only-528400        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=docker                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:20:13
	Running on machine: minikube3
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:20:13.667182    4232 out.go:291] Setting OutFile to fd 616 ...
	I0717 00:20:13.667819    4232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:20:13.667819    4232 out.go:304] Setting ErrFile to fd 620...
	I0717 00:20:13.667819    4232 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0717 00:20:13.681222    4232 root.go:314] Error reading config file at C:\Users\jenkins.minikube3\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube3\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0717 00:20:13.691568    4232 out.go:298] Setting JSON to true
	I0717 00:20:13.693071    4232 start.go:129] hostinfo: {"hostname":"minikube3","uptime":7628,"bootTime":1721167984,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0717 00:20:13.693071    4232 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 00:20:13.701802    4232 out.go:97] [download-only-528400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	W0717 00:20:13.702440    4232 preload.go:293] Failed to list preload files: open C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0717 00:20:13.702440    4232 notify.go:220] Checking for updates...
	I0717 00:20:13.702782    4232 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0717 00:20:13.708734    4232 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0717 00:20:13.709099    4232 out.go:169] MINIKUBE_LOCATION=19264
	I0717 00:20:13.714123    4232 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0717 00:20:13.722680    4232 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 00:20:13.723811    4232 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:20:14.008046    4232 docker.go:123] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0717 00:20:14.022399    4232 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:20:15.420136    4232 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.3971269s)
	I0717 00:20:15.421135    4232 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:80 SystemTime:2024-07-17 00:20:15.361218987 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion
:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:D
ocker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0717 00:20:15.424479    4232 out.go:97] Using the docker driver based on user configuration
	I0717 00:20:15.424479    4232 start.go:297] selected driver: docker
	I0717 00:20:15.424479    4232 start.go:901] validating driver "docker" against <nil>
	I0717 00:20:15.438611    4232 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:20:15.811880    4232 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:80 SystemTime:2024-07-17 00:20:15.756525594 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion
:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:D
ocker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0717 00:20:15.812291    4232 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 00:20:15.939813    4232 start_flags.go:393] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I0717 00:20:15.941198    4232 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 00:20:15.942970    4232 out.go:169] Using Docker Desktop driver with root privileges
	I0717 00:20:15.948696    4232 cni.go:84] Creating CNI manager for ""
	I0717 00:20:15.948696    4232 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0717 00:20:15.948696    4232 start.go:340] cluster config:
	{Name:download-only-528400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-528400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:20:15.950706    4232 out.go:97] Starting "download-only-528400" primary control-plane node in "download-only-528400" cluster
	I0717 00:20:15.950706    4232 cache.go:121] Beginning downloading kic base image for docker with docker
	I0717 00:20:15.953688    4232 out.go:97] Pulling base image v0.0.44-1721146479-19264 ...
	I0717 00:20:15.953688    4232 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0717 00:20:15.953688    4232 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local docker daemon
	I0717 00:20:16.010936    4232 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0717 00:20:16.018020    4232 cache.go:56] Caching tarball of preloaded images
	I0717 00:20:16.018087    4232 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0717 00:20:16.021363    4232 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0717 00:20:16.021464    4232 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0717 00:20:16.143749    4232 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0717 00:20:16.145415    4232 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e to local cache
	I0717 00:20:16.145742    4232 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721146479-19264@sha256_7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar
	I0717 00:20:16.146464    4232 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721146479-19264@sha256_7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar
	I0717 00:20:16.146464    4232 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local cache directory
	I0717 00:20:16.147791    4232 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e to local cache
	I0717 00:20:20.666055    4232 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0717 00:20:20.670012    4232 preload.go:254] verifying checksum of C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0717 00:20:21.690356    4232 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0717 00:20:21.693781    4232 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\download-only-528400\config.json ...
	I0717 00:20:21.694468    4232 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\download-only-528400\config.json: {Name:mk3670f7e27b09e8013ab7aa4a2774f3cccc40ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 00:20:21.694634    4232 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0717 00:20:21.696056    4232 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\windows\amd64\v1.20.0/kubectl.exe
	
	
	* The control-plane node download-only-528400 host does not exist
	  To start a cluster, run: "minikube start -p download-only-528400"

-- /stdout --
** stderr ** 
	W0717 00:20:27.131899    9112 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.29s)

TestDownloadOnly/v1.20.0/DeleteAll (2.52s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (2.5213622s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (2.52s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-528400
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-528400: (1.2251833s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.23s)

                                                
                                    
TestDownloadOnly/v1.30.2/json-events (8.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-168500 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=docker --driver=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-168500 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=docker --driver=docker: (8.2645815s)
--- PASS: TestDownloadOnly/v1.30.2/json-events (8.27s)

                                                
                                    
TestDownloadOnly/v1.30.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/kubectl
--- PASS: TestDownloadOnly/v1.30.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.2/LogsDuration (0.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-168500
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-168500: exit status 85 (250.8946ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-528400 | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:20 UTC |                     |
	|         | -p download-only-528400        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=docker                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:20 UTC | 17 Jul 24 00:20 UTC |
	| delete  | -p download-only-528400        | download-only-528400 | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:20 UTC | 17 Jul 24 00:20 UTC |
	| start   | -o=json --download-only        | download-only-168500 | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:20 UTC |                     |
	|         | -p download-only-168500        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.2   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=docker                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:20:31
	Running on machine: minikube3
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:20:31.235258    6176 out.go:291] Setting OutFile to fd 696 ...
	I0717 00:20:31.235258    6176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:20:31.235258    6176 out.go:304] Setting ErrFile to fd 712...
	I0717 00:20:31.235258    6176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:20:31.265320    6176 out.go:298] Setting JSON to true
	I0717 00:20:31.266364    6176 start.go:129] hostinfo: {"hostname":"minikube3","uptime":7646,"bootTime":1721167984,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0717 00:20:31.266364    6176 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 00:20:31.275581    6176 out.go:97] [download-only-168500] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0717 00:20:31.275581    6176 notify.go:220] Checking for updates...
	I0717 00:20:31.278991    6176 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0717 00:20:31.281856    6176 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0717 00:20:31.284271    6176 out.go:169] MINIKUBE_LOCATION=19264
	I0717 00:20:31.287474    6176 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0717 00:20:31.293418    6176 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 00:20:31.294208    6176 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:20:31.584214    6176 docker.go:123] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0717 00:20:31.594572    6176 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:20:31.950658    6176 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:80 SystemTime:2024-07-17 00:20:31.896537781 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0717 00:20:32.088311    6176 out.go:97] Using the docker driver based on user configuration
	I0717 00:20:32.088311    6176 start.go:297] selected driver: docker
	I0717 00:20:32.103621    6176 start.go:901] validating driver "docker" against <nil>
	I0717 00:20:32.125946    6176 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:20:32.504601    6176 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:80 SystemTime:2024-07-17 00:20:32.454571014 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0717 00:20:32.505361    6176 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 00:20:32.557379    6176 start_flags.go:393] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I0717 00:20:32.558649    6176 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 00:20:32.559057    6176 out.go:169] Using Docker Desktop driver with root privileges
	I0717 00:20:32.563873    6176 cni.go:84] Creating CNI manager for ""
	I0717 00:20:32.563873    6176 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 00:20:32.564024    6176 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 00:20:32.564313    6176 start.go:340] cluster config:
	{Name:download-only-168500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:download-only-168500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:20:32.566804    6176 out.go:97] Starting "download-only-168500" primary control-plane node in "download-only-168500" cluster
	I0717 00:20:32.566903    6176 cache.go:121] Beginning downloading kic base image for docker with docker
	I0717 00:20:32.569220    6176 out.go:97] Pulling base image v0.0.44-1721146479-19264 ...
	I0717 00:20:32.569220    6176 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 00:20:32.569772    6176 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local docker daemon
	I0717 00:20:32.630331    6176 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0717 00:20:32.630454    6176 cache.go:56] Caching tarball of preloaded images
	I0717 00:20:32.631061    6176 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0717 00:20:32.633765    6176 out.go:97] Downloading Kubernetes v1.30.2 preload ...
	I0717 00:20:32.633887    6176 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 ...
	I0717 00:20:32.736174    6176 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4?checksum=md5:f94875995e68df9a8856f3277eea0126 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0717 00:20:32.757144    6176 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e to local cache
	I0717 00:20:32.757144    6176 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721146479-19264@sha256_7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar
	I0717 00:20:32.757144    6176 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721146479-19264@sha256_7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar
	I0717 00:20:32.757144    6176 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local cache directory
	I0717 00:20:32.757144    6176 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local cache directory, skipping pull
	I0717 00:20:32.758930    6176 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e exists in cache, skipping pull
	I0717 00:20:32.759150    6176 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e as a tarball
	
	
	* The control-plane node download-only-168500 host does not exist
	  To start a cluster, run: "minikube start -p download-only-168500"

                                                
                                                
-- /stdout --
** stderr ** 
	W0717 00:20:39.444127    6052 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.2/LogsDuration (0.26s)

                                                
                                    
TestDownloadOnly/v1.30.2/DeleteAll (2.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (2.3107002s)
--- PASS: TestDownloadOnly/v1.30.2/DeleteAll (2.31s)

                                                
                                    
TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (1.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-168500
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-168500: (1.3015242s)
--- PASS: TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (1.31s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/json-events (7.58s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-469400 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-469400 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=docker: (7.5722659s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (7.58s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.5s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-469400
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-469400: exit status 85 (492.1619ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-528400 | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:20 UTC |                     |
	|         | -p download-only-528400             |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr           |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |                   |         |                     |                     |
	|         | --container-runtime=docker          |                      |                   |         |                     |                     |
	|         | --driver=docker                     |                      |                   |         |                     |                     |
	| delete  | --all                               | minikube             | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:20 UTC | 17 Jul 24 00:20 UTC |
	| delete  | -p download-only-528400             | download-only-528400 | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:20 UTC | 17 Jul 24 00:20 UTC |
	| start   | -o=json --download-only             | download-only-168500 | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:20 UTC |                     |
	|         | -p download-only-168500             |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr           |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.2        |                      |                   |         |                     |                     |
	|         | --container-runtime=docker          |                      |                   |         |                     |                     |
	|         | --driver=docker                     |                      |                   |         |                     |                     |
	| delete  | --all                               | minikube             | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:20 UTC | 17 Jul 24 00:20 UTC |
	| delete  | -p download-only-168500             | download-only-168500 | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:20 UTC | 17 Jul 24 00:20 UTC |
	| start   | -o=json --download-only             | download-only-469400 | minikube3\jenkins | v1.33.1 | 17 Jul 24 00:20 UTC |                     |
	|         | -p download-only-469400             |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr           |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |                   |         |                     |                     |
	|         | --container-runtime=docker          |                      |                   |         |                     |                     |
	|         | --driver=docker                     |                      |                   |         |                     |                     |
	|---------|-------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/17 00:20:43
	Running on machine: minikube3
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 00:20:43.419296    2144 out.go:291] Setting OutFile to fd 720 ...
	I0717 00:20:43.420234    2144 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:20:43.420234    2144 out.go:304] Setting ErrFile to fd 780...
	I0717 00:20:43.420234    2144 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:20:43.447463    2144 out.go:298] Setting JSON to true
	I0717 00:20:43.453088    2144 start.go:129] hostinfo: {"hostname":"minikube3","uptime":7658,"bootTime":1721167984,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0717 00:20:43.453088    2144 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 00:20:43.455366    2144 out.go:97] [download-only-469400] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0717 00:20:43.455366    2144 notify.go:220] Checking for updates...
	I0717 00:20:43.466203    2144 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0717 00:20:43.469224    2144 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0717 00:20:43.472050    2144 out.go:169] MINIKUBE_LOCATION=19264
	I0717 00:20:43.473313    2144 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0717 00:20:43.480026    2144 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 00:20:43.481074    2144 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:20:43.767714    2144 docker.go:123] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0717 00:20:43.778169    2144 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:20:44.147957    2144 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:80 SystemTime:2024-07-17 00:20:44.092447911 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion
:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:D
ocker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0717 00:20:44.176441    2144 out.go:97] Using the docker driver based on user configuration
	I0717 00:20:44.176441    2144 start.go:297] selected driver: docker
	I0717 00:20:44.176441    2144 start.go:901] validating driver "docker" against <nil>
	I0717 00:20:44.200679    2144 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:20:44.549571    2144 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:80 SystemTime:2024-07-17 00:20:44.508587654 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion
:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:D
ocker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0717 00:20:44.549571    2144 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0717 00:20:44.596905    2144 start_flags.go:393] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I0717 00:20:44.598852    2144 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 00:20:44.602423    2144 out.go:169] Using Docker Desktop driver with root privileges
	I0717 00:20:44.606199    2144 cni.go:84] Creating CNI manager for ""
	I0717 00:20:44.606199    2144 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0717 00:20:44.606199    2144 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0717 00:20:44.606199    2144 start.go:340] cluster config:
	{Name:download-only-469400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-469400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseIn
terval:1m0s}
	I0717 00:20:44.610895    2144 out.go:97] Starting "download-only-469400" primary control-plane node in "download-only-469400" cluster
	I0717 00:20:44.610895    2144 cache.go:121] Beginning downloading kic base image for docker with docker
	I0717 00:20:44.613792    2144 out.go:97] Pulling base image v0.0.44-1721146479-19264 ...
	I0717 00:20:44.613792    2144 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0717 00:20:44.613792    2144 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local docker daemon
	I0717 00:20:44.671456    2144 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0717 00:20:44.671456    2144 cache.go:56] Caching tarball of preloaded images
	I0717 00:20:44.672879    2144 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0717 00:20:44.676049    2144 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0717 00:20:44.676164    2144 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0717 00:20:44.781001    2144 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4?checksum=md5:181d3c061f7abe363e688bf9ac3c9580 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0717 00:20:44.813744    2144 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e to local cache
	I0717 00:20:44.813744    2144 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721146479-19264@sha256_7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar
	I0717 00:20:44.814449    2144 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.44-1721146479-19264@sha256_7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e.tar
	I0717 00:20:44.814502    2144 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local cache directory
	I0717 00:20:44.814549    2144 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local cache directory, skipping pull
	I0717 00:20:44.814549    2144 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e exists in cache, skipping pull
	I0717 00:20:44.814549    2144 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e as a tarball
	
	
	* The control-plane node download-only-469400 host does not exist
	  To start a cluster, run: "minikube start -p download-only-469400"

-- /stdout --
** stderr ** 
	W0717 00:20:50.920257    8908 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.50s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (1.89s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.8809971s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (1.89s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (1.19s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-469400
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-469400: (1.180161s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (1.19s)

TestDownloadOnlyKic (3.91s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-879300 --alsologtostderr --driver=docker
aaa_download_only_test.go:232: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-879300 --alsologtostderr --driver=docker: (1.5962658s)
helpers_test.go:175: Cleaning up "download-docker-879300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-879300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-docker-879300: (1.4170031s)
--- PASS: TestDownloadOnlyKic (3.91s)

TestBinaryMirror (3.6s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-561300 --alsologtostderr --binary-mirror http://127.0.0.1:62154 --driver=docker
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-561300 --alsologtostderr --binary-mirror http://127.0.0.1:62154 --driver=docker: (1.9276844s)
helpers_test.go:175: Cleaning up "binary-mirror-561300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-561300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p binary-mirror-561300: (1.4261198s)
--- PASS: TestBinaryMirror (3.60s)

TestOffline (250.3s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-222100 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-222100 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker: (4m3.6022287s)
helpers_test.go:175: Cleaning up "offline-docker-222100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-222100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-222100: (6.699055s)
--- PASS: TestOffline (250.30s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.32s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-285600
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-285600: exit status 85 (312.6043ms)

-- stdout --
	* Profile "addons-285600" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-285600"

-- /stdout --
** stderr ** 
	W0717 00:21:05.974935   12520 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.32s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.32s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-285600
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-285600: exit status 85 (319.2029ms)

-- stdout --
	* Profile "addons-285600" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-285600"

-- /stdout --
** stderr ** 
	W0717 00:21:05.974935   12496 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.32s)

TestAddons/Setup (595.24s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-285600 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-285600 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (9m55.2384173s)
--- PASS: TestAddons/Setup (595.24s)

TestAddons/parallel/InspektorGadget (15.19s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-6qzgl" [0e0e2483-5fe8-486c-801e-2633b484ab27] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0813744s
addons_test.go:843: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-285600
addons_test.go:843: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-285600: (9.1084046s)
--- PASS: TestAddons/parallel/InspektorGadget (15.19s)

TestAddons/parallel/MetricsServer (8.04s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 7.1065ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-mr9k5" [f5d4dd4d-71da-4ac8-9570-0b6d36289353] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0193723s
addons_test.go:417: (dbg) Run:  kubectl --context addons-285600 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-285600 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:434: (dbg) Done: out/minikube-windows-amd64.exe -p addons-285600 addons disable metrics-server --alsologtostderr -v=1: (2.8075826s)
--- PASS: TestAddons/parallel/MetricsServer (8.04s)

TestAddons/parallel/HelmTiller (30.59s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 62.7739ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-94x7l" [efa0aad9-f794-4cd2-96ca-89ef18232474] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0280879s
addons_test.go:475: (dbg) Run:  kubectl --context addons-285600 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-285600 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (22.3385839s)
addons_test.go:492: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-285600 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:492: (dbg) Done: out/minikube-windows-amd64.exe -p addons-285600 addons disable helm-tiller --alsologtostderr -v=1: (3.1237273s)
--- PASS: TestAddons/parallel/HelmTiller (30.59s)

TestAddons/parallel/CSI (75.53s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 19.7915ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-285600 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-285600 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [47563b10-1e5b-4d20-a6b2-67abde8c164e] Pending
helpers_test.go:344: "task-pv-pod" [47563b10-1e5b-4d20-a6b2-67abde8c164e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [47563b10-1e5b-4d20-a6b2-67abde8c164e] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.0962876s
addons_test.go:586: (dbg) Run:  kubectl --context addons-285600 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-285600 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-285600 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-285600 delete pod task-pv-pod
addons_test.go:596: (dbg) Done: kubectl --context addons-285600 delete pod task-pv-pod: (4.9720591s)
addons_test.go:602: (dbg) Run:  kubectl --context addons-285600 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-285600 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-285600 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [cc5faee4-50ca-4562-9883-69c7cacff0b2] Pending
helpers_test.go:344: "task-pv-pod-restore" [cc5faee4-50ca-4562-9883-69c7cacff0b2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [cc5faee4-50ca-4562-9883-69c7cacff0b2] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.0185057s
addons_test.go:628: (dbg) Run:  kubectl --context addons-285600 delete pod task-pv-pod-restore
addons_test.go:628: (dbg) Done: kubectl --context addons-285600 delete pod task-pv-pod-restore: (1.8735367s)
addons_test.go:632: (dbg) Run:  kubectl --context addons-285600 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-285600 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-285600 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-windows-amd64.exe -p addons-285600 addons disable csi-hostpath-driver --alsologtostderr -v=1: (8.7254798s)
addons_test.go:644: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-285600 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-windows-amd64.exe -p addons-285600 addons disable volumesnapshots --alsologtostderr -v=1: (2.3617681s)
--- PASS: TestAddons/parallel/CSI (75.53s)

TestAddons/parallel/Headlamp (27.25s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-285600 --alsologtostderr -v=1
addons_test.go:826: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-285600 --alsologtostderr -v=1: (2.2154334s)
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-wpq7n" [ce7907e8-d14e-4d35-9ba7-b002d651c0e9] Pending
helpers_test.go:344: "headlamp-7867546754-wpq7n" [ce7907e8-d14e-4d35-9ba7-b002d651c0e9] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-wpq7n" [ce7907e8-d14e-4d35-9ba7-b002d651c0e9] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-wpq7n" [ce7907e8-d14e-4d35-9ba7-b002d651c0e9] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 25.0353608s
--- PASS: TestAddons/parallel/Headlamp (27.25s)

TestAddons/parallel/CloudSpanner (8.02s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-bhv5l" [468f8c0c-607f-4f1d-9396-c90b84db0be1] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0154717s
addons_test.go:862: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-285600
addons_test.go:862: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-285600: (2.9776257s)
--- PASS: TestAddons/parallel/CloudSpanner (8.02s)

TestAddons/parallel/LocalPath (90.49s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-285600 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-285600 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-285600 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [ddbe7913-8b1f-4cdd-ab25-76e3ca9cb7be] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [ddbe7913-8b1f-4cdd-ab25-76e3ca9cb7be] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [ddbe7913-8b1f-4cdd-ab25-76e3ca9cb7be] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 13.0179941s
addons_test.go:992: (dbg) Run:  kubectl --context addons-285600 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-285600 ssh "cat /opt/local-path-provisioner/pvc-bf222cc7-b83d-40d5-a3e3-6b40029f896b_default_test-pvc/file1"
addons_test.go:1001: (dbg) Done: out/minikube-windows-amd64.exe -p addons-285600 ssh "cat /opt/local-path-provisioner/pvc-bf222cc7-b83d-40d5-a3e3-6b40029f896b_default_test-pvc/file1": (1.2592579s)
addons_test.go:1013: (dbg) Run:  kubectl --context addons-285600 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-285600 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-285600 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-windows-amd64.exe -p addons-285600 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (45.1787543s)
--- PASS: TestAddons/parallel/LocalPath (90.49s)

TestAddons/parallel/NvidiaDevicePlugin (9.61s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-bqpr8" [5ca506e8-4ed8-4d01-b876-fc9da45f4226] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0219233s
addons_test.go:1056: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-285600
addons_test.go:1056: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-285600: (3.5799384s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (9.61s)

TestAddons/parallel/Yakd (5.03s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-zjfzg" [9a5f08a3-fda1-491e-9ee6-d6a85611957e] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.0225068s
--- PASS: TestAddons/parallel/Yakd (5.03s)

TestAddons/parallel/Volcano (69.02s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

=== CONT  TestAddons/parallel/Volcano
addons_test.go:905: volcano-controller stabilized in 38.4201ms
addons_test.go:889: volcano-scheduler stabilized in 43.7153ms
addons_test.go:897: volcano-admission stabilized in 44.1459ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-7q994" [6a399998-f1f0-407d-872d-405f6d8d075b] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 6.0165977s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-5rldj" [f7b728e7-d533-4fe9-857e-949037292207] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 5.067884s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-v67k5" [819169a2-4ca9-4e16-ba0d-7f9dea9498f2] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.0204624s
addons_test.go:924: (dbg) Run:  kubectl --context addons-285600 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-285600 create -f testdata\vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-285600 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [c5112f01-139b-42e6-9607-9d06f43a129d] Pending
helpers_test.go:344: "test-job-nginx-0" [c5112f01-139b-42e6-9607-9d06f43a129d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [c5112f01-139b-42e6-9607-9d06f43a129d] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 34.0209355s
addons_test.go:960: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-285600 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-windows-amd64.exe -p addons-285600 addons disable volcano --alsologtostderr -v=1: (17.3305431s)
--- PASS: TestAddons/parallel/Volcano (69.02s)

TestAddons/serial/GCPAuth/Namespaces (0.36s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-285600 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-285600 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.36s)

TestAddons/StoppedEnableDisable (14.81s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-285600
addons_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-285600: (13.1026884s)
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-285600
addons_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-285600
addons_test.go:187: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-285600
--- PASS: TestAddons/StoppedEnableDisable (14.81s)

TestCertOptions (88.38s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-482900 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-482900 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (1m18.7371012s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-482900 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-482900 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (1.4364655s)
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-482900 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-482900 -- "sudo cat /etc/kubernetes/admin.conf": (1.2988543s)
helpers_test.go:175: Cleaning up "cert-options-482900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-482900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-482900: (6.6850109s)
--- PASS: TestCertOptions (88.38s)

TestCertExpiration (378.43s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-314200 --memory=2048 --cert-expiration=3m --driver=docker
E0717 01:56:01.464491    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-314200 --memory=2048 --cert-expiration=3m --driver=docker: (2m2.6337861s)
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-314200 --memory=2048 --cert-expiration=8760h --driver=docker
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-314200 --memory=2048 --cert-expiration=8760h --driver=docker: (1m8.5305143s)
helpers_test.go:175: Cleaning up "cert-expiration-314200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-314200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-314200: (7.2465932s)
--- PASS: TestCertExpiration (378.43s)

TestDockerFlags (90.43s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-397500 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker
E0717 01:56:53.120149    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-397500 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (1m17.1269983s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-397500 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-397500 ssh "sudo systemctl show docker --property=Environment --no-pager": (1.2465422s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-397500 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-397500 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (1.2347001s)
helpers_test.go:175: Cleaning up "docker-flags-397500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-397500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-397500: (10.8225891s)
--- PASS: TestDockerFlags (90.43s)

TestForceSystemdFlag (174.42s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-640200 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-640200 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker: (2m45.0264531s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-640200 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-640200 ssh "docker info --format {{.CgroupDriver}}": (1.3815445s)
helpers_test.go:175: Cleaning up "force-systemd-flag-640200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-640200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-640200: (8.0061512s)
--- PASS: TestForceSystemdFlag (174.42s)

TestForceSystemdEnv (138.77s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-724600 --memory=2048 --alsologtostderr -v=5 --driver=docker
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-724600 --memory=2048 --alsologtostderr -v=5 --driver=docker: (2m11.3025289s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-724600 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-724600 ssh "docker info --format {{.CgroupDriver}}": (1.3225064s)
helpers_test.go:175: Cleaning up "force-systemd-env-724600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-724600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-724600: (6.146777s)
--- PASS: TestForceSystemdEnv (138.77s)

TestErrorSpam/start (4.19s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-480800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-480800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 start --dry-run: (1.4593012s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-480800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-480800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 start --dry-run: (1.3516811s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-480800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-480800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 start --dry-run: (1.3752678s)
--- PASS: TestErrorSpam/start (4.19s)

                                                
                                    
TestErrorSpam/status (4.21s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-480800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-480800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 status: (1.3978975s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-480800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-480800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 status: (1.4099695s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-480800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-480800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 status: (1.4037046s)
--- PASS: TestErrorSpam/status (4.21s)

                                                
                                    
TestErrorSpam/pause (4.07s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-480800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-480800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 pause: (1.553801s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-480800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-480800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 pause: (1.2457663s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-480800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-480800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 pause: (1.2650893s)
--- PASS: TestErrorSpam/pause (4.07s)

                                                
                                    
TestErrorSpam/unpause (5.54s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-480800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 unpause
E0717 00:42:23.458662    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-480800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 unpause: (2.0186777s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-480800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-480800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 unpause: (1.4390514s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-480800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-480800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 unpause: (2.084642s)
--- PASS: TestErrorSpam/unpause (5.54s)

                                                
                                    
TestErrorSpam/stop (19.9s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-480800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-480800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 stop: (11.5207207s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-480800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-480800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 stop: (3.4660439s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-480800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-480800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-480800 stop: (4.9105744s)
--- PASS: TestErrorSpam/stop (19.90s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\test\nested\copy\7712\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

                                                
                                    
TestFunctional/serial/StartWithProxy (94.33s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-965000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker
E0717 00:43:45.382711    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-965000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker: (1m34.3224837s)
--- PASS: TestFunctional/serial/StartWithProxy (94.33s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (52.83s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-965000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-965000 --alsologtostderr -v=8: (52.8255542s)
functional_test.go:659: soft start took 52.8281118s for "functional-965000" cluster.
--- PASS: TestFunctional/serial/SoftStart (52.83s)

                                                
                                    
TestFunctional/serial/KubeContext (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.13s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.25s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-965000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (7.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 cache add registry.k8s.io/pause:3.1: (2.4397841s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 cache add registry.k8s.io/pause:3.3: (2.4430336s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 cache add registry.k8s.io/pause:latest: (2.3917421s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (7.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (4.71s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-965000 C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2102948774\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-965000 C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2102948774\001: (2.3944852s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 cache add minikube-local-cache-test:functional-965000
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 cache add minikube-local-cache-test:functional-965000: (1.7964321s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 cache delete minikube-local-cache-test:functional-965000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-965000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (4.71s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (1.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 ssh sudo crictl images: (1.2440684s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (1.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (5.7s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 ssh sudo docker rmi registry.k8s.io/pause:latest: (1.2599986s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-965000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (1.25589s)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	W0717 00:45:36.368314    5412 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 cache reload: (1.9121574s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (1.2561873s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (5.70s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.53s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.53s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.52s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 kubectl -- --context functional-965000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.52s)

                                                
                                    
TestFunctional/serial/ExtraConfig (51.62s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-965000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0717 00:46:01.435882    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
E0717 00:46:29.236697    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-965000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (51.6181247s)
functional_test.go:757: restart took 51.6186616s for "functional-965000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (51.62s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.19s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-965000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.19s)

                                                
                                    
TestFunctional/serial/LogsCmd (2.88s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 logs: (2.8752393s)
--- PASS: TestFunctional/serial/LogsCmd (2.88s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (2.95s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 logs --file C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3650190882\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 logs --file C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3650190882\001\logs.txt: (2.9476812s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.95s)

                                                
                                    
TestFunctional/serial/InvalidService (6.07s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-965000 apply -f testdata\invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-965000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-965000: exit status 115 (1.5773478s)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31033 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0717 00:46:50.052602    6568 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube_service_5a553248039ac2ab6beea740c8d8ce1b809666c7_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-965000 delete -f testdata\invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (6.07s)

                                                
                                    
TestFunctional/parallel/DryRun (3.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-965000 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-965000 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (1.4836353s)

                                                
                                                
-- stdout --
	* [functional-965000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19264
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0717 00:47:58.319448   14284 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0717 00:47:58.446827   14284 out.go:291] Setting OutFile to fd 692 ...
	I0717 00:47:58.447808   14284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:47:58.447808   14284 out.go:304] Setting ErrFile to fd 992...
	I0717 00:47:58.447945   14284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:47:58.475923   14284 out.go:298] Setting JSON to false
	I0717 00:47:58.481791   14284 start.go:129] hostinfo: {"hostname":"minikube3","uptime":9293,"bootTime":1721167984,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0717 00:47:58.481791   14284 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 00:47:58.488112   14284 out.go:177] * [functional-965000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0717 00:47:58.498533   14284 notify.go:220] Checking for updates...
	I0717 00:47:58.500689   14284 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0717 00:47:58.506860   14284 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 00:47:58.511181   14284 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0717 00:47:58.515792   14284 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 00:47:58.523074   14284 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:47:58.528262   14284 config.go:182] Loaded profile config "functional-965000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 00:47:58.529596   14284 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:47:58.953816   14284 docker.go:123] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0717 00:47:58.967550   14284 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:47:59.476884   14284 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:69 OomKillDisable:true NGoroutines:87 SystemTime:2024-07-17 00:47:59.413114039 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0717 00:47:59.481333   14284 out.go:177] * Using the docker driver based on existing profile
	I0717 00:47:59.484772   14284 start.go:297] selected driver: docker
	I0717 00:47:59.484867   14284 start.go:901] validating driver "docker" against &{Name:functional-965000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-965000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:47:59.484867   14284 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 00:47:59.585778   14284 out.go:177] 
	W0717 00:47:59.589730   14284 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0717 00:47:59.592740   14284 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-965000 --dry-run --alsologtostderr -v=1 --driver=docker
functional_test.go:987: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-965000 --dry-run --alsologtostderr -v=1 --driver=docker: (1.8996861s)
--- PASS: TestFunctional/parallel/DryRun (3.38s)

TestFunctional/parallel/InternationalLanguage (1.32s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-965000 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-965000 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (1.3176489s)

-- stdout --
	* [functional-965000] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19264
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	W0717 00:48:01.685249    6472 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0717 00:48:01.807222    6472 out.go:291] Setting OutFile to fd 788 ...
	I0717 00:48:01.808358    6472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:48:01.808412    6472 out.go:304] Setting ErrFile to fd 864...
	I0717 00:48:01.808412    6472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 00:48:01.843569    6472 out.go:298] Setting JSON to false
	I0717 00:48:01.847965    6472 start.go:129] hostinfo: {"hostname":"minikube3","uptime":9296,"bootTime":1721167984,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0717 00:48:01.847965    6472 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0717 00:48:01.853395    6472 out.go:177] * [functional-965000] minikube v1.33.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0717 00:48:01.857541    6472 notify.go:220] Checking for updates...
	I0717 00:48:01.859551    6472 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0717 00:48:01.864041    6472 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 00:48:01.867110    6472 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0717 00:48:01.871588    6472 out.go:177]   - MINIKUBE_LOCATION=19264
	I0717 00:48:01.873914    6472 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 00:48:01.877187    6472 config.go:182] Loaded profile config "functional-965000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 00:48:01.878332    6472 driver.go:392] Setting default libvirt URI to qemu:///system
	I0717 00:48:02.306044    6472 docker.go:123] docker version: linux-26.0.0:Docker Desktop 4.29.0 (145265)
	I0717 00:48:02.321827    6472 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 00:48:02.741116    6472 info.go:266] docker info: {ID:924ecda6-fdfd-44a1-a6d3-1c1814628cc9 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:69 OomKillDisable:true NGoroutines:87 SystemTime:2024-07-17 00:48:02.68763576 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.146.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Inde
xServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657614336 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:26.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 E
xpected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.1-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1-desktop.1] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container. Vendor:Docker Inc. Version:0.0.27] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:
0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.23] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.4] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.1.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Do
cker Scout Vendor:Docker Inc. Version:v1.6.3]] Warnings:<nil>}}
	I0717 00:48:02.747699    6472 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0717 00:48:02.750748    6472 start.go:297] selected driver: docker
	I0717 00:48:02.750748    6472 start.go:901] validating driver "docker" against &{Name:functional-965000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-965000 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0717 00:48:02.750970    6472 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 00:48:02.811276    6472 out.go:177] 
	W0717 00:48:02.814271    6472 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0717 00:48:02.817261    6472 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (1.32s)

TestFunctional/parallel/StatusCmd (7.31s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 status: (2.1226289s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (2.6719335s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 status -o json: (2.5109525s)
--- PASS: TestFunctional/parallel/StatusCmd (7.31s)
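The `status -f` invocation above passes a Go text/template format string ("kublet" is simply the label the test chose for its output key; the underlying field is `.Kubelet`). A self-contained sketch of how such a format string renders, using a stand-in `Status` struct rather than minikube's real one:

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// Status is a hypothetical stand-in for minikube's status struct; the field
// names match the template used by the test above.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

// render applies a user-supplied Go template to a Status, the way
// `minikube status -f` formats its output.
func render(format string, st Status) (string, error) {
	tmpl, err := template.New("status").Parse(format)
	if err != nil {
		return "", err
	}
	var b strings.Builder
	if err := tmpl.Execute(&b, st); err != nil {
		return "", err
	}
	return b.String(), nil
}

func main() {
	// The exact format string from the test log.
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
	out, err := render(format, Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"})
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
	// host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
}
```

The `-o json` variant in the same test serializes the whole struct instead of templating it.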

TestFunctional/parallel/AddonsCmd (1.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (1.11s)

TestFunctional/parallel/PersistentVolumeClaim (96.91s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [c0f5142c-ffc2-469e-b7eb-75daaf5247cc] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0134228s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-965000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-965000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-965000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-965000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [cba81f7e-06c9-4014-b2b1-a57318ccb4e3] Pending
helpers_test.go:344: "sp-pod" [cba81f7e-06c9-4014-b2b1-a57318ccb4e3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [cba81f7e-06c9-4014-b2b1-a57318ccb4e3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 31.0361792s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-965000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-965000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-965000 delete -f testdata/storage-provisioner/pod.yaml: (2.4954473s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-965000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8b99e6c4-6f0e-42c2-b9aa-845c186ef925] Pending
helpers_test.go:344: "sp-pod" [8b99e6c4-6f0e-42c2-b9aa-845c186ef925] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8b99e6c4-6f0e-42c2-b9aa-845c186ef925] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 55.0175525s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-965000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (96.91s)

TestFunctional/parallel/SSHCmd (2.71s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 ssh "echo hello"
functional_test.go:1721: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 ssh "echo hello": (1.3436108s)
functional_test.go:1738: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 ssh "cat /etc/hostname": (1.3632434s)
--- PASS: TestFunctional/parallel/SSHCmd (2.71s)

TestFunctional/parallel/CpCmd (7.33s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 cp testdata\cp-test.txt /home/docker/cp-test.txt: (1.0802311s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 ssh -n functional-965000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 ssh -n functional-965000 "sudo cat /home/docker/cp-test.txt": (1.2359558s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 cp functional-965000:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalparallelCpCmd2462605891\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 cp functional-965000:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalparallelCpCmd2462605891\001\cp-test.txt: (1.2522903s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 ssh -n functional-965000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 ssh -n functional-965000 "sudo cat /home/docker/cp-test.txt": (1.3010643s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (1.1483125s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 ssh -n functional-965000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 ssh -n functional-965000 "sudo cat /tmp/does/not/exist/cp-test.txt": (1.3077018s)
--- PASS: TestFunctional/parallel/CpCmd (7.33s)

TestFunctional/parallel/MySQL (90.83s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-965000 replace --force -f testdata\mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-kxbpf" [dcc9ccbd-cd48-4670-95a2-2e19427e29e0] Pending
helpers_test.go:344: "mysql-64454c8b5c-kxbpf" [dcc9ccbd-cd48-4670-95a2-2e19427e29e0] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-kxbpf" [dcc9ccbd-cd48-4670-95a2-2e19427e29e0] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 1m19.0108168s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-965000 exec mysql-64454c8b5c-kxbpf -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-965000 exec mysql-64454c8b5c-kxbpf -- mysql -ppassword -e "show databases;": exit status 1 (290.9497ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-965000 exec mysql-64454c8b5c-kxbpf -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-965000 exec mysql-64454c8b5c-kxbpf -- mysql -ppassword -e "show databases;": exit status 1 (332.2372ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-965000 exec mysql-64454c8b5c-kxbpf -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-965000 exec mysql-64454c8b5c-kxbpf -- mysql -ppassword -e "show databases;": exit status 1 (343.5701ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-965000 exec mysql-64454c8b5c-kxbpf -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-965000 exec mysql-64454c8b5c-kxbpf -- mysql -ppassword -e "show databases;": exit status 1 (321.9584ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-965000 exec mysql-64454c8b5c-kxbpf -- mysql -ppassword -e "show databases;"
E0717 00:51:01.436853    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/MySQL (90.83s)

TestFunctional/parallel/FileSync (1.67s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/7712/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 ssh "sudo cat /etc/test/nested/copy/7712/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 ssh "sudo cat /etc/test/nested/copy/7712/hosts": (1.6703824s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (1.67s)

TestFunctional/parallel/CertSync (10.72s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/7712.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 ssh "sudo cat /etc/ssl/certs/7712.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 ssh "sudo cat /etc/ssl/certs/7712.pem": (1.7086792s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/7712.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 ssh "sudo cat /usr/share/ca-certificates/7712.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 ssh "sudo cat /usr/share/ca-certificates/7712.pem": (1.7946578s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 ssh "sudo cat /etc/ssl/certs/51391683.0": (1.7863817s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/77122.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 ssh "sudo cat /etc/ssl/certs/77122.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 ssh "sudo cat /etc/ssl/certs/77122.pem": (1.9774541s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/77122.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 ssh "sudo cat /usr/share/ca-certificates/77122.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 ssh "sudo cat /usr/share/ca-certificates/77122.pem": (1.7760496s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (1.666593s)
--- PASS: TestFunctional/parallel/CertSync (10.72s)

TestFunctional/parallel/NodeLabels (0.23s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-965000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.23s)

TestFunctional/parallel/NonActiveRuntimeDisabled (1.7s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-965000 ssh "sudo systemctl is-active crio": exit status 1 (1.6924927s)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	W0717 00:46:52.609950    9884 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (1.70s)

TestFunctional/parallel/License (5.07s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (5.0400452s)
--- PASS: TestFunctional/parallel/License (5.07s)

TestFunctional/parallel/ServiceCmd/DeployApp (27.62s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-965000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-965000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-twmmr" [c51f12bf-e019-4b63-b815-9264055208a3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-twmmr" [c51f12bf-e019-4b63-b815-9264055208a3] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 27.0181899s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (27.62s)

TestFunctional/parallel/Version/short (0.27s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.27s)

TestFunctional/parallel/Version/components (2.39s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 version -o=json --components: (2.3851597s)
--- PASS: TestFunctional/parallel/Version/components (2.39s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-965000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-965000
docker.io/kicbase/echo-server:functional-965000
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-965000 image ls --format short --alsologtostderr:
W0717 00:48:22.369932    8156 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0717 00:48:22.453098    8156 out.go:291] Setting OutFile to fd 732 ...
I0717 00:48:22.453588    8156 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:48:22.453588    8156 out.go:304] Setting ErrFile to fd 896...
I0717 00:48:22.453588    8156 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:48:22.475350    8156 config.go:182] Loaded profile config "functional-965000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 00:48:22.476042    8156 config.go:182] Loaded profile config "functional-965000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 00:48:22.500925    8156 cli_runner.go:164] Run: docker container inspect functional-965000 --format={{.State.Status}}
I0717 00:48:22.714543    8156 ssh_runner.go:195] Run: systemctl --version
I0717 00:48:22.723559    8156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-965000
I0717 00:48:22.919018    8156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63089 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-965000\id_rsa Username:docker}
I0717 00:48:23.058956    8156 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.90s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-965000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.30.2           | 53c535741fb44 | 84.7MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/kube-apiserver              | v1.30.2           | 56ce0fd9fb532 | 117MB  |
| registry.k8s.io/kube-controller-manager     | v1.30.2           | e874818b3caac | 111MB  |
| registry.k8s.io/kube-scheduler              | v1.30.2           | 7820c83aa1394 | 62MB   |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/localhost/my-image                | functional-965000 | 598fb90257f57 | 1.24MB |
| docker.io/library/minikube-local-cache-test | functional-965000 | 4c00c12cf540d | 30B    |
| docker.io/library/nginx                     | latest            | fffffc90d343c | 188MB  |
| docker.io/library/nginx                     | alpine            | 099a2d701db1f | 43.2MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/kicbase/echo-server               | functional-965000 | 9056ab77afb8e | 4.94MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-965000 image ls --format table --alsologtostderr:
W0717 00:48:36.689167    6456 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0717 00:48:36.780405    6456 out.go:291] Setting OutFile to fd 880 ...
I0717 00:48:36.781087    6456 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:48:36.781170    6456 out.go:304] Setting ErrFile to fd 408...
I0717 00:48:36.781170    6456 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:48:36.802104    6456 config.go:182] Loaded profile config "functional-965000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 00:48:36.802701    6456 config.go:182] Loaded profile config "functional-965000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 00:48:36.833578    6456 cli_runner.go:164] Run: docker container inspect functional-965000 --format={{.State.Status}}
I0717 00:48:37.026287    6456 ssh_runner.go:195] Run: systemctl --version
I0717 00:48:37.036697    6456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-965000
I0717 00:48:37.229131    6456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63089 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-965000\id_rsa Username:docker}
I0717 00:48:37.393156    6456 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.92s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-965000 image ls --format json --alsologtostderr:
[{"id":"fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.2"],"size":"62000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-965000"],"size":"4940000"},{"id":"56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.2"],"size":"117000000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"598fb90257f579f2a9767f634c4d19c6374c6e76f7b94097ed1995064780c7a5","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-965000"],"size":"1240000"},{"id":"4c00c12cf540d4a8777252c96ad7088ec282657b4e72fa04cb6d46a677437478","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-965000"],"size":"30"},{"id":"099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.2"],"size":"111000000"},{"id":"53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.2"],"size":"84700000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-965000 image ls --format json --alsologtostderr:
W0717 00:48:35.743419    9016 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0717 00:48:35.846923    9016 out.go:291] Setting OutFile to fd 980 ...
I0717 00:48:35.847925    9016 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:48:35.848059    9016 out.go:304] Setting ErrFile to fd 764...
I0717 00:48:35.848112    9016 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:48:35.870969    9016 config.go:182] Loaded profile config "functional-965000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 00:48:35.870969    9016 config.go:182] Loaded profile config "functional-965000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 00:48:35.889976    9016 cli_runner.go:164] Run: docker container inspect functional-965000 --format={{.State.Status}}
I0717 00:48:36.113435    9016 ssh_runner.go:195] Run: systemctl --version
I0717 00:48:36.122425    9016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-965000
I0717 00:48:36.317768    9016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63089 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-965000\id_rsa Username:docker}
I0717 00:48:36.470303    9016 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.94s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-965000 image ls --format yaml --alsologtostderr:
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 4c00c12cf540d4a8777252c96ad7088ec282657b4e72fa04cb6d46a677437478
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-965000
size: "30"
- id: fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-965000
size: "4940000"
- id: 099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.2
size: "111000000"
- id: 53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.2
size: "84700000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.2
size: "117000000"
- id: 7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.2
size: "62000000"

functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-965000 image ls --format yaml --alsologtostderr:
W0717 00:48:23.279966   10544 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0717 00:48:23.371466   10544 out.go:291] Setting OutFile to fd 624 ...
I0717 00:48:23.371952   10544 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:48:23.371952   10544 out.go:304] Setting ErrFile to fd 780...
I0717 00:48:23.371952   10544 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:48:23.396835   10544 config.go:182] Loaded profile config "functional-965000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 00:48:23.397105   10544 config.go:182] Loaded profile config "functional-965000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 00:48:23.418785   10544 cli_runner.go:164] Run: docker container inspect functional-965000 --format={{.State.Status}}
I0717 00:48:23.631795   10544 ssh_runner.go:195] Run: systemctl --version
I0717 00:48:23.645764   10544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-965000
I0717 00:48:23.850577   10544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63089 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-965000\id_rsa Username:docker}
I0717 00:48:24.021345   10544 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.96s)

TestFunctional/parallel/ImageCommands/ImageBuild (11.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-965000 ssh pgrep buildkitd: exit status 1 (1.238597s)

** stderr ** 
	W0717 00:48:24.228112    7356 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 image build -t localhost/my-image:functional-965000 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 image build -t localhost/my-image:functional-965000 testdata\build --alsologtostderr: (9.3613537s)
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-965000 image build -t localhost/my-image:functional-965000 testdata\build --alsologtostderr:
W0717 00:48:25.465957    2540 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0717 00:48:25.560306    2540 out.go:291] Setting OutFile to fd 992 ...
I0717 00:48:25.578858    2540 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:48:25.578858    2540 out.go:304] Setting ErrFile to fd 400...
I0717 00:48:25.578958    2540 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 00:48:25.595298    2540 config.go:182] Loaded profile config "functional-965000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 00:48:25.613599    2540 config.go:182] Loaded profile config "functional-965000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0717 00:48:25.640562    2540 cli_runner.go:164] Run: docker container inspect functional-965000 --format={{.State.Status}}
I0717 00:48:25.837760    2540 ssh_runner.go:195] Run: systemctl --version
I0717 00:48:25.849648    2540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-965000
I0717 00:48:26.037361    2540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63089 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-965000\id_rsa Username:docker}
I0717 00:48:26.179686    2540 build_images.go:161] Building image from path: C:\Users\jenkins.minikube3\AppData\Local\Temp\build.463312147.tar
I0717 00:48:26.200424    2540 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0717 00:48:26.243656    2540 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.463312147.tar
I0717 00:48:26.255201    2540 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.463312147.tar: stat -c "%s %y" /var/lib/minikube/build/build.463312147.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.463312147.tar': No such file or directory
I0717 00:48:26.255860    2540 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\AppData\Local\Temp\build.463312147.tar --> /var/lib/minikube/build/build.463312147.tar (3072 bytes)
I0717 00:48:26.324439    2540 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.463312147
I0717 00:48:26.360525    2540 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.463312147 -xf /var/lib/minikube/build/build.463312147.tar
I0717 00:48:26.394874    2540 docker.go:360] Building image: /var/lib/minikube/build/build.463312147
I0717 00:48:26.409897    2540 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-965000 /var/lib/minikube/build/build.463312147
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.9s

#6 [2/3] RUN true
#6 DONE 4.7s

#7 [3/3] ADD content.txt /
#7 DONE 0.2s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:598fb90257f579f2a9767f634c4d19c6374c6e76f7b94097ed1995064780c7a5
#8 writing image sha256:598fb90257f579f2a9767f634c4d19c6374c6e76f7b94097ed1995064780c7a5 0.0s done
#8 naming to localhost/my-image:functional-965000 0.0s done
#8 DONE 0.2s
I0717 00:48:34.558271    2540 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-965000 /var/lib/minikube/build/build.463312147: (8.1483076s)
I0717 00:48:34.574101    2540 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.463312147
I0717 00:48:34.615878    2540 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.463312147.tar
I0717 00:48:34.678965    2540 build_images.go:217] Built localhost/my-image:functional-965000 from C:\Users\jenkins.minikube3\AppData\Local\Temp\build.463312147.tar
I0717 00:48:34.679187    2540 build_images.go:133] succeeded building to: functional-965000
I0717 00:48:34.679187    2540 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (11.52s)

TestFunctional/parallel/ImageCommands/Setup (3.2s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (2.8089038s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-965000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.20s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 image load --daemon docker.io/kicbase/echo-server:functional-965000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 image load --daemon docker.io/kicbase/echo-server:functional-965000 --alsologtostderr: (4.6884137s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 image ls: (1.4753962s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.16s)

TestFunctional/parallel/ProfileCmd/profile_not_create (3.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.6207549s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (3.38s)

TestFunctional/parallel/ProfileCmd/profile_list (2.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (2.0253615s)
functional_test.go:1311: Took "2.0254997s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "323.6376ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (2.35s)

TestFunctional/parallel/ProfileCmd/profile_json_output (2.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (2.0838852s)
functional_test.go:1362: Took "2.0858586s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "303.249ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (2.39s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 image load --daemon docker.io/kicbase/echo-server:functional-965000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 image load --daemon docker.io/kicbase/echo-server:functional-965000 --alsologtostderr: (2.342352s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 image ls: (1.1709723s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.52s)

TestFunctional/parallel/DockerEnv/powershell (11.61s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-965000 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-965000"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-965000 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-965000": (7.3642658s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-965000 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-965000 docker-env | Invoke-Expression ; docker images": (4.221882s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (11.61s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:234: (dbg) Done: docker pull docker.io/kicbase/echo-server:latest: (1.1192627s)
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-965000
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 image load --daemon docker.io/kicbase/echo-server:functional-965000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 image load --daemon docker.io/kicbase/echo-server:functional-965000 --alsologtostderr: (2.4240152s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 image ls: (1.2235512s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.02s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 image save docker.io/kicbase/echo-server:functional-965000 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 image save docker.io/kicbase/echo-server:functional-965000 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr: (2.0360494s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.04s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.73s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.73s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.68s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.68s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.72s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.72s)

TestFunctional/parallel/ImageCommands/ImageRemove (1.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 image rm docker.io/kicbase/echo-server:functional-965000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.96s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr: (1.5263944s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.51s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-965000
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 image save --daemon docker.io/kicbase/echo-server:functional-965000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 image save --daemon docker.io/kicbase/echo-server:functional-965000 --alsologtostderr: (1.9335195s)
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-965000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.46s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (2.61s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-965000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-965000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-965000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 15264: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 4968: TerminateProcess: Access is denied.
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-965000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (2.61s)

TestFunctional/parallel/ServiceCmd/List (2.75s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 service list
functional_test.go:1455: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 service list: (2.7482983s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (2.75s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-965000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (35.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-965000 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [4fcc49b5-57cf-4000-998d-8df555b7267e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [4fcc49b5-57cf-4000-998d-8df555b7267e] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 34.0593933s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (35.01s)

TestFunctional/parallel/ServiceCmd/JSONOutput (2.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-windows-amd64.exe -p functional-965000 service list -o json: (2.202939s)
functional_test.go:1490: Took "2.202939s" to run "out/minikube-windows-amd64.exe -p functional-965000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (2.20s)

TestFunctional/parallel/ServiceCmd/HTTPS (15.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-965000 service --namespace=default --https --url hello-node: exit status 1 (15.0267386s)

-- stdout --
	https://127.0.0.1:63396

-- /stdout --
** stderr ** 
	W0717 00:47:25.155336    8368 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1518: found endpoint: https://127.0.0.1:63396
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.03s)

TestFunctional/parallel/ServiceCmd/Format (15.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-965000 service hello-node --url --format={{.IP}}: exit status 1 (15.0419754s)

-- stdout --
	127.0.0.1

-- /stdout --
** stderr ** 
	W0717 00:47:40.184289    3400 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.05s)

TestFunctional/parallel/ServiceCmd/URL (15.02s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-965000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-965000 service hello-node --url: exit status 1 (15.0185222s)

-- stdout --
	http://127.0.0.1:63466

-- /stdout --
** stderr ** 
	W0717 00:47:55.277282    9360 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1561: found endpoint for hello-node: http://127.0.0.1:63466
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.31s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-965000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.31s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.24s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-965000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 14644: TerminateProcess: Access is denied.
helpers_test.go:508: unable to kill pid 5948: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.24s)

TestFunctional/delete_echo-server_images (0.41s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-965000
--- PASS: TestFunctional/delete_echo-server_images (0.41s)

TestFunctional/delete_my-image_image (0.19s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-965000
--- PASS: TestFunctional/delete_my-image_image (0.19s)

TestFunctional/delete_minikube_cached_images (0.17s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-965000
--- PASS: TestFunctional/delete_minikube_cached_images (0.17s)

TestMultiControlPlane/serial/StartCluster (279.7s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-100400 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker
E0717 00:56:01.435430    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
E0717 00:56:53.082975    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
E0717 00:56:53.097839    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
E0717 00:56:53.113230    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
E0717 00:56:53.144502    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
E0717 00:56:53.191469    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
E0717 00:56:53.282918    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
E0717 00:56:53.456287    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
E0717 00:56:53.783593    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
E0717 00:56:54.439392    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
E0717 00:56:55.734453    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
E0717 00:56:58.294887    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
E0717 00:57:03.426141    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
E0717 00:57:13.668169    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
E0717 00:57:24.617415    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
E0717 00:57:34.150700    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-100400 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker: (4m35.9695911s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 status -v=7 --alsologtostderr: (3.7310007s)
--- PASS: TestMultiControlPlane/serial/StartCluster (279.70s)

TestMultiControlPlane/serial/DeployApp (28.71s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-100400 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-100400 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-100400 -- rollout status deployment/busybox: (18.745347s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-100400 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-100400 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-100400 -- exec busybox-fc5497c4f-86lqr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-100400 -- exec busybox-fc5497c4f-86lqr -- nslookup kubernetes.io: (1.7512039s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-100400 -- exec busybox-fc5497c4f-csqtr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-100400 -- exec busybox-fc5497c4f-csqtr -- nslookup kubernetes.io: (1.5822976s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-100400 -- exec busybox-fc5497c4f-d257p -- nslookup kubernetes.io
E0717 00:58:15.124821    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-100400 -- exec busybox-fc5497c4f-d257p -- nslookup kubernetes.io: (1.6033294s)
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-100400 -- exec busybox-fc5497c4f-86lqr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-100400 -- exec busybox-fc5497c4f-csqtr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-100400 -- exec busybox-fc5497c4f-d257p -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-100400 -- exec busybox-fc5497c4f-86lqr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-100400 -- exec busybox-fc5497c4f-csqtr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-100400 -- exec busybox-fc5497c4f-d257p -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (28.71s)

TestMultiControlPlane/serial/PingHostFromPods (3.87s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-100400 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-100400 -- exec busybox-fc5497c4f-86lqr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-100400 -- exec busybox-fc5497c4f-86lqr -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-100400 -- exec busybox-fc5497c4f-csqtr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-100400 -- exec busybox-fc5497c4f-csqtr -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-100400 -- exec busybox-fc5497c4f-d257p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-100400 -- exec busybox-fc5497c4f-d257p -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (3.87s)

TestMultiControlPlane/serial/AddWorkerNode (75.31s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-100400 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-100400 -v=7 --alsologtostderr: (1m10.7857907s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 status -v=7 --alsologtostderr
E0717 00:59:37.047168    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 status -v=7 --alsologtostderr: (4.5277162s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (75.31s)

TestMultiControlPlane/serial/NodeLabels (0.21s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-100400 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.21s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (3.6s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (3.5953748s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (3.60s)

TestMultiControlPlane/serial/CopyFile (75.27s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 status --output json -v=7 --alsologtostderr: (4.4807269s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 cp testdata\cp-test.txt ha-100400:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 cp testdata\cp-test.txt ha-100400:/home/docker/cp-test.txt: (1.3253327s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400 "sudo cat /home/docker/cp-test.txt": (1.2536808s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 cp ha-100400:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2229517196\001\cp-test_ha-100400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 cp ha-100400:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2229517196\001\cp-test_ha-100400.txt: (1.2263496s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400 "sudo cat /home/docker/cp-test.txt": (1.235527s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 cp ha-100400:/home/docker/cp-test.txt ha-100400-m02:/home/docker/cp-test_ha-100400_ha-100400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 cp ha-100400:/home/docker/cp-test.txt ha-100400-m02:/home/docker/cp-test_ha-100400_ha-100400-m02.txt: (1.8013239s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400 "sudo cat /home/docker/cp-test.txt": (1.2537333s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m02 "sudo cat /home/docker/cp-test_ha-100400_ha-100400-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m02 "sudo cat /home/docker/cp-test_ha-100400_ha-100400-m02.txt": (1.2417215s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 cp ha-100400:/home/docker/cp-test.txt ha-100400-m03:/home/docker/cp-test_ha-100400_ha-100400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 cp ha-100400:/home/docker/cp-test.txt ha-100400-m03:/home/docker/cp-test_ha-100400_ha-100400-m03.txt: (1.8365431s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400 "sudo cat /home/docker/cp-test.txt": (1.2877806s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m03 "sudo cat /home/docker/cp-test_ha-100400_ha-100400-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m03 "sudo cat /home/docker/cp-test_ha-100400_ha-100400-m03.txt": (1.2317263s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 cp ha-100400:/home/docker/cp-test.txt ha-100400-m04:/home/docker/cp-test_ha-100400_ha-100400-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 cp ha-100400:/home/docker/cp-test.txt ha-100400-m04:/home/docker/cp-test_ha-100400_ha-100400-m04.txt: (1.8097572s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400 "sudo cat /home/docker/cp-test.txt": (1.1717713s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m04 "sudo cat /home/docker/cp-test_ha-100400_ha-100400-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m04 "sudo cat /home/docker/cp-test_ha-100400_ha-100400-m04.txt": (1.269221s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 cp testdata\cp-test.txt ha-100400-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 cp testdata\cp-test.txt ha-100400-m02:/home/docker/cp-test.txt: (1.247607s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m02 "sudo cat /home/docker/cp-test.txt": (1.240171s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 cp ha-100400-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2229517196\001\cp-test_ha-100400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 cp ha-100400-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2229517196\001\cp-test_ha-100400-m02.txt: (1.2340333s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m02 "sudo cat /home/docker/cp-test.txt": (1.2536437s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 cp ha-100400-m02:/home/docker/cp-test.txt ha-100400:/home/docker/cp-test_ha-100400-m02_ha-100400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 cp ha-100400-m02:/home/docker/cp-test.txt ha-100400:/home/docker/cp-test_ha-100400-m02_ha-100400.txt: (1.7955773s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m02 "sudo cat /home/docker/cp-test.txt": (1.2478496s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400 "sudo cat /home/docker/cp-test_ha-100400-m02_ha-100400.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400 "sudo cat /home/docker/cp-test_ha-100400-m02_ha-100400.txt": (1.1899273s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 cp ha-100400-m02:/home/docker/cp-test.txt ha-100400-m03:/home/docker/cp-test_ha-100400-m02_ha-100400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 cp ha-100400-m02:/home/docker/cp-test.txt ha-100400-m03:/home/docker/cp-test_ha-100400-m02_ha-100400-m03.txt: (1.8246358s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m02 "sudo cat /home/docker/cp-test.txt": (1.226146s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m03 "sudo cat /home/docker/cp-test_ha-100400-m02_ha-100400-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m03 "sudo cat /home/docker/cp-test_ha-100400-m02_ha-100400-m03.txt": (1.2238616s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 cp ha-100400-m02:/home/docker/cp-test.txt ha-100400-m04:/home/docker/cp-test_ha-100400-m02_ha-100400-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 cp ha-100400-m02:/home/docker/cp-test.txt ha-100400-m04:/home/docker/cp-test_ha-100400-m02_ha-100400-m04.txt: (1.8555503s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m02 "sudo cat /home/docker/cp-test.txt": (1.2452616s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m04 "sudo cat /home/docker/cp-test_ha-100400-m02_ha-100400-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m04 "sudo cat /home/docker/cp-test_ha-100400-m02_ha-100400-m04.txt": (1.2564006s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 cp testdata\cp-test.txt ha-100400-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 cp testdata\cp-test.txt ha-100400-m03:/home/docker/cp-test.txt: (1.236001s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m03 "sudo cat /home/docker/cp-test.txt": (1.2400937s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 cp ha-100400-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2229517196\001\cp-test_ha-100400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 cp ha-100400-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2229517196\001\cp-test_ha-100400-m03.txt: (1.2317545s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m03 "sudo cat /home/docker/cp-test.txt": (1.1895016s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 cp ha-100400-m03:/home/docker/cp-test.txt ha-100400:/home/docker/cp-test_ha-100400-m03_ha-100400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 cp ha-100400-m03:/home/docker/cp-test.txt ha-100400:/home/docker/cp-test_ha-100400-m03_ha-100400.txt: (1.8079101s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m03 "sudo cat /home/docker/cp-test.txt": (1.2377376s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400 "sudo cat /home/docker/cp-test_ha-100400-m03_ha-100400.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400 "sudo cat /home/docker/cp-test_ha-100400-m03_ha-100400.txt": (1.2259999s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 cp ha-100400-m03:/home/docker/cp-test.txt ha-100400-m02:/home/docker/cp-test_ha-100400-m03_ha-100400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 cp ha-100400-m03:/home/docker/cp-test.txt ha-100400-m02:/home/docker/cp-test_ha-100400-m03_ha-100400-m02.txt: (1.7750223s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m03 "sudo cat /home/docker/cp-test.txt": (1.1865708s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m02 "sudo cat /home/docker/cp-test_ha-100400-m03_ha-100400-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m02 "sudo cat /home/docker/cp-test_ha-100400-m03_ha-100400-m02.txt": (1.2279929s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 cp ha-100400-m03:/home/docker/cp-test.txt ha-100400-m04:/home/docker/cp-test_ha-100400-m03_ha-100400-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 cp ha-100400-m03:/home/docker/cp-test.txt ha-100400-m04:/home/docker/cp-test_ha-100400-m03_ha-100400-m04.txt: (1.768451s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m03 "sudo cat /home/docker/cp-test.txt": (1.2367433s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m04 "sudo cat /home/docker/cp-test_ha-100400-m03_ha-100400-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m04 "sudo cat /home/docker/cp-test_ha-100400-m03_ha-100400-m04.txt": (1.1909273s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 cp testdata\cp-test.txt ha-100400-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 cp testdata\cp-test.txt ha-100400-m04:/home/docker/cp-test.txt: (1.2437359s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m04 "sudo cat /home/docker/cp-test.txt": (1.1672621s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 cp ha-100400-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2229517196\001\cp-test_ha-100400-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 cp ha-100400-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2229517196\001\cp-test_ha-100400-m04.txt: (1.1976893s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m04 "sudo cat /home/docker/cp-test.txt": (1.200546s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 cp ha-100400-m04:/home/docker/cp-test.txt ha-100400:/home/docker/cp-test_ha-100400-m04_ha-100400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 cp ha-100400-m04:/home/docker/cp-test.txt ha-100400:/home/docker/cp-test_ha-100400-m04_ha-100400.txt: (1.7643323s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m04 "sudo cat /home/docker/cp-test.txt": (1.2085005s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400 "sudo cat /home/docker/cp-test_ha-100400-m04_ha-100400.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400 "sudo cat /home/docker/cp-test_ha-100400-m04_ha-100400.txt": (1.1973852s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 cp ha-100400-m04:/home/docker/cp-test.txt ha-100400-m02:/home/docker/cp-test_ha-100400-m04_ha-100400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 cp ha-100400-m04:/home/docker/cp-test.txt ha-100400-m02:/home/docker/cp-test_ha-100400-m04_ha-100400-m02.txt: (1.8037888s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m04 "sudo cat /home/docker/cp-test.txt": (1.2202535s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m02 "sudo cat /home/docker/cp-test_ha-100400-m04_ha-100400-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m02 "sudo cat /home/docker/cp-test_ha-100400-m04_ha-100400-m02.txt": (1.1797194s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 cp ha-100400-m04:/home/docker/cp-test.txt ha-100400-m03:/home/docker/cp-test_ha-100400-m04_ha-100400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 cp ha-100400-m04:/home/docker/cp-test.txt ha-100400-m03:/home/docker/cp-test_ha-100400-m04_ha-100400-m03.txt: (1.8104158s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m04 "sudo cat /home/docker/cp-test.txt": (1.215745s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m03 "sudo cat /home/docker/cp-test_ha-100400-m04_ha-100400-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 ssh -n ha-100400-m03 "sudo cat /home/docker/cp-test_ha-100400-m04_ha-100400-m03.txt": (1.2122598s)
--- PASS: TestMultiControlPlane/serial/CopyFile (75.27s)

TestMultiControlPlane/serial/StopSecondaryNode (15.86s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 node stop m02 -v=7 --alsologtostderr
E0717 01:01:01.434686    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
ha_test.go:363: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 node stop m02 -v=7 --alsologtostderr: (12.2714034s)
ha_test.go:369: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-100400 status -v=7 --alsologtostderr: exit status 7 (3.590327s)

-- stdout --
	ha-100400
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-100400-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-100400-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-100400-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	W0717 01:01:09.919496    8244 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0717 01:01:10.004767    8244 out.go:291] Setting OutFile to fd 620 ...
	I0717 01:01:10.005956    8244 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:01:10.005956    8244 out.go:304] Setting ErrFile to fd 800...
	I0717 01:01:10.005956    8244 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:01:10.022453    8244 out.go:298] Setting JSON to false
	I0717 01:01:10.022453    8244 mustload.go:65] Loading cluster: ha-100400
	I0717 01:01:10.023132    8244 notify.go:220] Checking for updates...
	I0717 01:01:10.023850    8244 config.go:182] Loaded profile config "ha-100400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 01:01:10.023973    8244 status.go:255] checking status of ha-100400 ...
	I0717 01:01:10.046372    8244 cli_runner.go:164] Run: docker container inspect ha-100400 --format={{.State.Status}}
	I0717 01:01:10.235444    8244 status.go:330] ha-100400 host status = "Running" (err=<nil>)
	I0717 01:01:10.235530    8244 host.go:66] Checking if "ha-100400" exists ...
	I0717 01:01:10.246866    8244 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-100400
	I0717 01:01:10.423686    8244 host.go:66] Checking if "ha-100400" exists ...
	I0717 01:01:10.439126    8244 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 01:01:10.448994    8244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-100400
	I0717 01:01:10.661661    8244 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63548 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-100400\id_rsa Username:docker}
	I0717 01:01:10.802869    8244 ssh_runner.go:195] Run: systemctl --version
	I0717 01:01:10.827028    8244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:01:10.865294    8244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-100400
	I0717 01:01:11.052595    8244 kubeconfig.go:125] found "ha-100400" server: "https://127.0.0.1:63552"
	I0717 01:01:11.052595    8244 api_server.go:166] Checking apiserver status ...
	I0717 01:01:11.067736    8244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:01:11.110986    8244 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2669/cgroup
	I0717 01:01:11.136052    8244 api_server.go:182] apiserver freezer: "7:freezer:/docker/52f7aa9507c618bedf864ec2d8ff9f037deaa3f9e2ab9a9ace04cc714985cc4d/kubepods/burstable/pod827e1f8ac41d9f33bef0e2081dec02ef/15cd5d219cf71177b5ef9ab0005f3ddf9b6eee19750ec3c80f167a3c8266aa08"
	I0717 01:01:11.149548    8244 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/52f7aa9507c618bedf864ec2d8ff9f037deaa3f9e2ab9a9ace04cc714985cc4d/kubepods/burstable/pod827e1f8ac41d9f33bef0e2081dec02ef/15cd5d219cf71177b5ef9ab0005f3ddf9b6eee19750ec3c80f167a3c8266aa08/freezer.state
	I0717 01:01:11.170208    8244 api_server.go:204] freezer state: "THAWED"
	I0717 01:01:11.170208    8244 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:63552/healthz ...
	I0717 01:01:11.186644    8244 api_server.go:279] https://127.0.0.1:63552/healthz returned 200:
	ok
	I0717 01:01:11.186956    8244 status.go:422] ha-100400 apiserver status = Running (err=<nil>)
	I0717 01:01:11.186956    8244 status.go:257] ha-100400 status: &{Name:ha-100400 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 01:01:11.187048    8244 status.go:255] checking status of ha-100400-m02 ...
	I0717 01:01:11.207792    8244 cli_runner.go:164] Run: docker container inspect ha-100400-m02 --format={{.State.Status}}
	I0717 01:01:11.392131    8244 status.go:330] ha-100400-m02 host status = "Stopped" (err=<nil>)
	I0717 01:01:11.392302    8244 status.go:343] host is not running, skipping remaining checks
	I0717 01:01:11.392355    8244 status.go:257] ha-100400-m02 status: &{Name:ha-100400-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 01:01:11.392355    8244 status.go:255] checking status of ha-100400-m03 ...
	I0717 01:01:11.413362    8244 cli_runner.go:164] Run: docker container inspect ha-100400-m03 --format={{.State.Status}}
	I0717 01:01:11.598829    8244 status.go:330] ha-100400-m03 host status = "Running" (err=<nil>)
	I0717 01:01:11.598829    8244 host.go:66] Checking if "ha-100400-m03" exists ...
	I0717 01:01:11.610176    8244 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-100400-m03
	I0717 01:01:11.799089    8244 host.go:66] Checking if "ha-100400-m03" exists ...
	I0717 01:01:11.813144    8244 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 01:01:11.821597    8244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-100400-m03
	I0717 01:01:12.017795    8244 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63674 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-100400-m03\id_rsa Username:docker}
	I0717 01:01:12.191103    8244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:01:12.228605    8244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-100400
	I0717 01:01:12.407124    8244 kubeconfig.go:125] found "ha-100400" server: "https://127.0.0.1:63552"
	I0717 01:01:12.407124    8244 api_server.go:166] Checking apiserver status ...
	I0717 01:01:12.420462    8244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:01:12.467236    8244 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2492/cgroup
	I0717 01:01:12.489261    8244 api_server.go:182] apiserver freezer: "7:freezer:/docker/110f1f8d80797b0f2b89cb44aff805a1d0f988b22d3646aec2508aee8687166d/kubepods/burstable/podea3bc4d6df010e92a61b19ba6490d44d/47a5a3a8ea1d19fe25f5bf8fad09682bc3b96bb4bf5fbd8dc129a6a998f8dc46"
	I0717 01:01:12.500236    8244 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/110f1f8d80797b0f2b89cb44aff805a1d0f988b22d3646aec2508aee8687166d/kubepods/burstable/podea3bc4d6df010e92a61b19ba6490d44d/47a5a3a8ea1d19fe25f5bf8fad09682bc3b96bb4bf5fbd8dc129a6a998f8dc46/freezer.state
	I0717 01:01:12.522684    8244 api_server.go:204] freezer state: "THAWED"
	I0717 01:01:12.522761    8244 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:63552/healthz ...
	I0717 01:01:12.537334    8244 api_server.go:279] https://127.0.0.1:63552/healthz returned 200:
	ok
	I0717 01:01:12.538160    8244 status.go:422] ha-100400-m03 apiserver status = Running (err=<nil>)
	I0717 01:01:12.538255    8244 status.go:257] ha-100400-m03 status: &{Name:ha-100400-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 01:01:12.538255    8244 status.go:255] checking status of ha-100400-m04 ...
	I0717 01:01:12.560760    8244 cli_runner.go:164] Run: docker container inspect ha-100400-m04 --format={{.State.Status}}
	I0717 01:01:12.751060    8244 status.go:330] ha-100400-m04 host status = "Running" (err=<nil>)
	I0717 01:01:12.751060    8244 host.go:66] Checking if "ha-100400-m04" exists ...
	I0717 01:01:12.767195    8244 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-100400-m04
	I0717 01:01:12.953337    8244 host.go:66] Checking if "ha-100400-m04" exists ...
	I0717 01:01:12.970409    8244 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 01:01:12.981643    8244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-100400-m04
	I0717 01:01:13.155222    8244 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63814 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-100400-m04\id_rsa Username:docker}
	I0717 01:01:13.326773    8244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:01:13.356189    8244 status.go:257] ha-100400-m04 status: &{Name:ha-100400-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (15.86s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (2.83s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.8254946s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (2.83s)

TestMultiControlPlane/serial/RestartSecondaryNode (67.86s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 node start m02 -v=7 --alsologtostderr
E0717 01:01:53.082190    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
ha_test.go:420: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 node start m02 -v=7 --alsologtostderr: (1m3.0696521s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 status -v=7 --alsologtostderr
E0717 01:02:20.890215    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
ha_test.go:428: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 status -v=7 --alsologtostderr: (4.5771994s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (67.86s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (3.72s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (3.7142846s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (3.72s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (301.25s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-windows-amd64.exe node list -p ha-100400 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-windows-amd64.exe stop -p ha-100400 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-windows-amd64.exe stop -p ha-100400 -v=7 --alsologtostderr: (40.0630225s)
ha_test.go:467: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-100400 --wait=true -v=7 --alsologtostderr
E0717 01:06:01.449272    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
E0717 01:06:53.091280    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
ha_test.go:467: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-100400 --wait=true -v=7 --alsologtostderr: (4m20.6803667s)
ha_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe node list -p ha-100400
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (301.25s)

TestMultiControlPlane/serial/DeleteSecondaryNode (24.01s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 node delete m03 -v=7 --alsologtostderr: (20.0650117s)
ha_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 status -v=7 --alsologtostderr: (3.3141312s)
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (24.01s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.57s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.5697695s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.57s)

TestMultiControlPlane/serial/StopCluster (38.44s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 stop -v=7 --alsologtostderr: (37.6315131s)
ha_test.go:537: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-100400 status -v=7 --alsologtostderr: exit status 7 (810.9476ms)

-- stdout --
	ha-100400
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-100400-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-100400-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0717 01:08:33.372329   13536 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0717 01:08:33.448172   13536 out.go:291] Setting OutFile to fd 688 ...
	I0717 01:08:33.449016   13536 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:08:33.449016   13536 out.go:304] Setting ErrFile to fd 800...
	I0717 01:08:33.449105   13536 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:08:33.463304   13536 out.go:298] Setting JSON to false
	I0717 01:08:33.464313   13536 mustload.go:65] Loading cluster: ha-100400
	I0717 01:08:33.464451   13536 notify.go:220] Checking for updates...
	I0717 01:08:33.465137   13536 config.go:182] Loaded profile config "ha-100400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 01:08:33.465282   13536 status.go:255] checking status of ha-100400 ...
	I0717 01:08:33.487208   13536 cli_runner.go:164] Run: docker container inspect ha-100400 --format={{.State.Status}}
	I0717 01:08:33.666325   13536 status.go:330] ha-100400 host status = "Stopped" (err=<nil>)
	I0717 01:08:33.666390   13536 status.go:343] host is not running, skipping remaining checks
	I0717 01:08:33.666390   13536 status.go:257] ha-100400 status: &{Name:ha-100400 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 01:08:33.666390   13536 status.go:255] checking status of ha-100400-m02 ...
	I0717 01:08:33.687367   13536 cli_runner.go:164] Run: docker container inspect ha-100400-m02 --format={{.State.Status}}
	I0717 01:08:33.852730   13536 status.go:330] ha-100400-m02 host status = "Stopped" (err=<nil>)
	I0717 01:08:33.852730   13536 status.go:343] host is not running, skipping remaining checks
	I0717 01:08:33.852730   13536 status.go:257] ha-100400-m02 status: &{Name:ha-100400-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 01:08:33.852730   13536 status.go:255] checking status of ha-100400-m04 ...
	I0717 01:08:33.872113   13536 cli_runner.go:164] Run: docker container inspect ha-100400-m04 --format={{.State.Status}}
	I0717 01:08:34.055458   13536 status.go:330] ha-100400-m04 host status = "Stopped" (err=<nil>)
	I0717 01:08:34.055458   13536 status.go:343] host is not running, skipping remaining checks
	I0717 01:08:34.055458   13536 status.go:257] ha-100400-m04 status: &{Name:ha-100400-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (38.44s)

TestMultiControlPlane/serial/RestartCluster (131.95s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-100400 --wait=true -v=7 --alsologtostderr --driver=docker
ha_test.go:560: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-100400 --wait=true -v=7 --alsologtostderr --driver=docker: (2m8.2596643s)
ha_test.go:566: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 status -v=7 --alsologtostderr
ha_test.go:566: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 status -v=7 --alsologtostderr: (3.2668682s)
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (131.95s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (2.63s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.6323127s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (2.63s)

TestMultiControlPlane/serial/AddSecondaryNode (89.13s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-100400 --control-plane -v=7 --alsologtostderr
E0717 01:11:01.441727    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
E0717 01:11:53.086694    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
ha_test.go:605: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-100400 --control-plane -v=7 --alsologtostderr: (1m24.6634525s)
ha_test.go:611: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-100400 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-windows-amd64.exe -p ha-100400 status -v=7 --alsologtostderr: (4.4660881s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (89.13s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (3.61s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (3.6105889s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (3.61s)

TestImageBuild/serial/Setup (73.51s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-669900 --driver=docker
E0717 01:13:16.266198    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-669900 --driver=docker: (1m13.5135202s)
--- PASS: TestImageBuild/serial/Setup (73.51s)

TestImageBuild/serial/NormalBuild (4.28s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-669900
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-669900: (4.2839818s)
--- PASS: TestImageBuild/serial/NormalBuild (4.28s)

TestImageBuild/serial/BuildWithBuildArg (2.63s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-669900
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-669900: (2.63138s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (2.63s)

TestImageBuild/serial/BuildWithDockerIgnore (1.71s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-669900
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-669900: (1.7079272s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.71s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (2.09s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-669900
E0717 01:14:04.641318    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-669900: (2.0919109s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (2.09s)

TestJSONOutput/start/Command (90.12s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-257100 --output=json --user=testUser --memory=2200 --wait=true --driver=docker
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-257100 --output=json --user=testUser --memory=2200 --wait=true --driver=docker: (1m30.1201721s)
--- PASS: TestJSONOutput/start/Command (90.12s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (1.75s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-257100 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-257100 --output=json --user=testUser: (1.752978s)
--- PASS: TestJSONOutput/pause/Command (1.75s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (1.66s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-257100 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-257100 --output=json --user=testUser: (1.6559908s)
--- PASS: TestJSONOutput/unpause/Command (1.66s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (13.01s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-257100 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-257100 --output=json --user=testUser: (13.0118793s)
--- PASS: TestJSONOutput/stop/Command (13.01s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.48s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-205900 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-205900 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (289.2207ms)

-- stdout --
	{"specversion":"1.0","id":"72596106-9a7a-4622-a00e-4635428c5806","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-205900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fdab424b-e8e9-41f7-9455-fbcc8d7b5c7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube3\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"39441b72-53f5-4097-ba78-277f25bd23ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7b68d5b1-84cf-4fac-b794-ebc6d174e189","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"663fe98c-7a66-413e-9411-d6563217408d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19264"}}
	{"specversion":"1.0","id":"d1cfa045-0fbf-4f7b-bf9d-c3c40bcdfa82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1d2b4db1-2b6f-4e24-a50f-34d4e4eec356","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
** stderr ** 
	W0717 01:16:02.073323   14664 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-205900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-205900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-205900: (1.1862574s)
--- PASS: TestErrorJSONOutput (1.48s)

TestKicCustomNetwork/create_custom_network (83.1s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-883800 --network=
E0717 01:16:53.092104    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-883800 --network=: (1m17.3439335s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-883800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-883800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-883800: (5.556748s)
--- PASS: TestKicCustomNetwork/create_custom_network (83.10s)

TestKicCustomNetwork/use_default_bridge_network (87.04s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-406200 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-406200 --network=bridge: (1m22.6321087s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-406200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-406200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-406200: (4.2273331s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (87.04s)

TestKicExistingNetwork (87.35s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-186400 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-186400 --network=existing-network: (1m21.1666863s)
helpers_test.go:175: Cleaning up "existing-network-186400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-186400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-186400: (4.7678245s)
--- PASS: TestKicExistingNetwork (87.35s)

TestKicCustomSubnet (80.88s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-182200 --subnet=192.168.60.0/24
E0717 01:21:01.443988    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-182200 --subnet=192.168.60.0/24: (1m15.0610867s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-182200 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-182200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-182200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-182200: (5.6078773s)
--- PASS: TestKicCustomSubnet (80.88s)

TestKicStaticIP (82.4s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe start -p static-ip-755200 --static-ip=192.168.200.200
E0717 01:21:53.098280    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
kic_custom_network_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe start -p static-ip-755200 --static-ip=192.168.200.200: (1m16.5877565s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe -p static-ip-755200 ip
helpers_test.go:175: Cleaning up "static-ip-755200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p static-ip-755200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p static-ip-755200: (5.1446676s)
--- PASS: TestKicStaticIP (82.40s)

TestMainNoArgs (0.26s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.26s)

TestMinikubeProfile (158.28s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-455100 --driver=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-455100 --driver=docker: (1m11.2494764s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-455100 --driver=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-455100 --driver=docker: (1m8.7388369s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-455100
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (2.7927703s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-455100
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (2.643965s)
helpers_test.go:175: Cleaning up "second-455100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-455100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-455100: (5.8501303s)
helpers_test.go:175: Cleaning up "first-455100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-455100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-455100: (6.0393616s)
--- PASS: TestMinikubeProfile (158.28s)

TestMountStart/serial/StartWithMountFirst (32.08s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-935400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
E0717 01:26:01.462151    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-935400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: (31.0684835s)
--- PASS: TestMountStart/serial/StartWithMountFirst (32.08s)

TestMountStart/serial/VerifyMountFirst (1.21s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-935400 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-935400 ssh -- ls /minikube-host: (1.2054573s)
--- PASS: TestMountStart/serial/VerifyMountFirst (1.21s)

TestMountStart/serial/StartWithMountSecond (31.39s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-935400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-935400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker: (30.3919402s)
--- PASS: TestMountStart/serial/StartWithMountSecond (31.39s)

TestMountStart/serial/VerifyMountSecond (1.18s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-935400 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-935400 ssh -- ls /minikube-host: (1.16986s)
--- PASS: TestMountStart/serial/VerifyMountSecond (1.18s)

TestMountStart/serial/DeleteFirst (4.14s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-935400 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-935400 --alsologtostderr -v=5: (4.1374724s)
--- PASS: TestMountStart/serial/DeleteFirst (4.14s)

TestMountStart/serial/VerifyMountPostDelete (1.19s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-935400 ssh -- ls /minikube-host
E0717 01:26:53.090898    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-935400 ssh -- ls /minikube-host: (1.1853874s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (1.19s)

TestMountStart/serial/Stop (2.63s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-935400
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-935400: (2.6302457s)
--- PASS: TestMountStart/serial/Stop (2.63s)

TestMountStart/serial/RestartStopped (23.6s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-935400
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-935400: (22.5928612s)
--- PASS: TestMountStart/serial/RestartStopped (23.60s)

TestMountStart/serial/VerifyMountPostStop (1.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-935400 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-935400 ssh -- ls /minikube-host: (1.2483337s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (1.25s)

TestMultiNode/serial/FreshStart2Nodes (179.06s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-861100 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
E0717 01:29:56.290003    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-861100 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker: (2m56.5446539s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 status --alsologtostderr: (2.5123214s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (179.06s)

TestMultiNode/serial/DeployApp2Nodes (35.44s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-861100 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-861100 -- rollout status deployment/busybox
E0717 01:30:44.659074    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-861100 -- rollout status deployment/busybox: (28.3498749s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-861100 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-861100 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-861100 -- exec busybox-fc5497c4f-lq5wc -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-861100 -- exec busybox-fc5497c4f-lq5wc -- nslookup kubernetes.io: (1.8482514s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-861100 -- exec busybox-fc5497c4f-s2jf9 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-861100 -- exec busybox-fc5497c4f-s2jf9 -- nslookup kubernetes.io: (1.5597461s)
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-861100 -- exec busybox-fc5497c4f-lq5wc -- nslookup kubernetes.default
E0717 01:31:01.460553    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-861100 -- exec busybox-fc5497c4f-s2jf9 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-861100 -- exec busybox-fc5497c4f-lq5wc -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-861100 -- exec busybox-fc5497c4f-s2jf9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (35.44s)

TestMultiNode/serial/PingHostFrom2Pods (2.54s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-861100 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-861100 -- exec busybox-fc5497c4f-lq5wc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-861100 -- exec busybox-fc5497c4f-lq5wc -- sh -c "ping -c 1 192.168.65.254"
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-861100 -- exec busybox-fc5497c4f-s2jf9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-861100 -- exec busybox-fc5497c4f-s2jf9 -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (2.54s)

TestMultiNode/serial/AddNode (68.85s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-861100 -v 3 --alsologtostderr
E0717 01:31:53.106664    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-861100 -v 3 --alsologtostderr: (1m5.6342547s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 status --alsologtostderr: (3.2110881s)
--- PASS: TestMultiNode/serial/AddNode (68.85s)

TestMultiNode/serial/MultiNodeLabels (0.21s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-861100 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.21s)

TestMultiNode/serial/ProfileList (1.49s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.4853792s)
--- PASS: TestMultiNode/serial/ProfileList (1.49s)

TestMultiNode/serial/CopyFile (41.83s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 status --output json --alsologtostderr: (2.9020047s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 cp testdata\cp-test.txt multinode-861100:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 cp testdata\cp-test.txt multinode-861100:/home/docker/cp-test.txt: (1.1434831s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100 "sudo cat /home/docker/cp-test.txt": (1.2497975s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 cp multinode-861100:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile590319710\001\cp-test_multinode-861100.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 cp multinode-861100:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile590319710\001\cp-test_multinode-861100.txt: (1.1649548s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100 "sudo cat /home/docker/cp-test.txt": (1.1818417s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 cp multinode-861100:/home/docker/cp-test.txt multinode-861100-m02:/home/docker/cp-test_multinode-861100_multinode-861100-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 cp multinode-861100:/home/docker/cp-test.txt multinode-861100-m02:/home/docker/cp-test_multinode-861100_multinode-861100-m02.txt: (1.7522747s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100 "sudo cat /home/docker/cp-test.txt": (1.1878789s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100-m02 "sudo cat /home/docker/cp-test_multinode-861100_multinode-861100-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100-m02 "sudo cat /home/docker/cp-test_multinode-861100_multinode-861100-m02.txt": (1.1929947s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 cp multinode-861100:/home/docker/cp-test.txt multinode-861100-m03:/home/docker/cp-test_multinode-861100_multinode-861100-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 cp multinode-861100:/home/docker/cp-test.txt multinode-861100-m03:/home/docker/cp-test_multinode-861100_multinode-861100-m03.txt: (1.7431617s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100 "sudo cat /home/docker/cp-test.txt": (1.1721251s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100-m03 "sudo cat /home/docker/cp-test_multinode-861100_multinode-861100-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100-m03 "sudo cat /home/docker/cp-test_multinode-861100_multinode-861100-m03.txt": (1.2075791s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 cp testdata\cp-test.txt multinode-861100-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 cp testdata\cp-test.txt multinode-861100-m02:/home/docker/cp-test.txt: (1.1573421s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100-m02 "sudo cat /home/docker/cp-test.txt": (1.1950467s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 cp multinode-861100-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile590319710\001\cp-test_multinode-861100-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 cp multinode-861100-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile590319710\001\cp-test_multinode-861100-m02.txt: (1.1747136s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100-m02 "sudo cat /home/docker/cp-test.txt": (1.1973112s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 cp multinode-861100-m02:/home/docker/cp-test.txt multinode-861100:/home/docker/cp-test_multinode-861100-m02_multinode-861100.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 cp multinode-861100-m02:/home/docker/cp-test.txt multinode-861100:/home/docker/cp-test_multinode-861100-m02_multinode-861100.txt: (1.7457787s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100-m02 "sudo cat /home/docker/cp-test.txt": (1.1920974s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100 "sudo cat /home/docker/cp-test_multinode-861100-m02_multinode-861100.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100 "sudo cat /home/docker/cp-test_multinode-861100-m02_multinode-861100.txt": (1.1787807s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 cp multinode-861100-m02:/home/docker/cp-test.txt multinode-861100-m03:/home/docker/cp-test_multinode-861100-m02_multinode-861100-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 cp multinode-861100-m02:/home/docker/cp-test.txt multinode-861100-m03:/home/docker/cp-test_multinode-861100-m02_multinode-861100-m03.txt: (1.7785558s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100-m02 "sudo cat /home/docker/cp-test.txt": (1.2021913s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100-m03 "sudo cat /home/docker/cp-test_multinode-861100-m02_multinode-861100-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100-m03 "sudo cat /home/docker/cp-test_multinode-861100-m02_multinode-861100-m03.txt": (1.2308528s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 cp testdata\cp-test.txt multinode-861100-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 cp testdata\cp-test.txt multinode-861100-m03:/home/docker/cp-test.txt: (1.1910508s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100-m03 "sudo cat /home/docker/cp-test.txt": (1.1621617s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 cp multinode-861100-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile590319710\001\cp-test_multinode-861100-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 cp multinode-861100-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile590319710\001\cp-test_multinode-861100-m03.txt: (1.1578958s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100-m03 "sudo cat /home/docker/cp-test.txt": (1.1708284s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 cp multinode-861100-m03:/home/docker/cp-test.txt multinode-861100:/home/docker/cp-test_multinode-861100-m03_multinode-861100.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 cp multinode-861100-m03:/home/docker/cp-test.txt multinode-861100:/home/docker/cp-test_multinode-861100-m03_multinode-861100.txt: (1.7587076s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100-m03 "sudo cat /home/docker/cp-test.txt": (1.168696s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100 "sudo cat /home/docker/cp-test_multinode-861100-m03_multinode-861100.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100 "sudo cat /home/docker/cp-test_multinode-861100-m03_multinode-861100.txt": (1.1612211s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 cp multinode-861100-m03:/home/docker/cp-test.txt multinode-861100-m02:/home/docker/cp-test_multinode-861100-m03_multinode-861100-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 cp multinode-861100-m03:/home/docker/cp-test.txt multinode-861100-m02:/home/docker/cp-test_multinode-861100-m03_multinode-861100-m02.txt: (1.7214832s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100-m03 "sudo cat /home/docker/cp-test.txt": (1.1828524s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100-m02 "sudo cat /home/docker/cp-test_multinode-861100-m03_multinode-861100-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 ssh -n multinode-861100-m02 "sudo cat /home/docker/cp-test_multinode-861100-m03_multinode-861100-m02.txt": (1.1839036s)
--- PASS: TestMultiNode/serial/CopyFile (41.83s)

TestMultiNode/serial/StopNode (6.87s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 node stop m03: (2.17223s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-861100 status: exit status 7 (2.3195306s)

-- stdout --
	multinode-861100
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-861100-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-861100-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0717 01:33:00.403740    7472 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-861100 status --alsologtostderr: exit status 7 (2.3778262s)

-- stdout --
	multinode-861100
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-861100-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-861100-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0717 01:33:02.723526    3968 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0717 01:33:02.803514    3968 out.go:291] Setting OutFile to fd 944 ...
	I0717 01:33:02.804229    3968 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:33:02.804229    3968 out.go:304] Setting ErrFile to fd 316...
	I0717 01:33:02.804229    3968 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:33:02.824527    3968 out.go:298] Setting JSON to false
	I0717 01:33:02.824527    3968 mustload.go:65] Loading cluster: multinode-861100
	I0717 01:33:02.824527    3968 notify.go:220] Checking for updates...
	I0717 01:33:02.825454    3968 config.go:182] Loaded profile config "multinode-861100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 01:33:02.825454    3968 status.go:255] checking status of multinode-861100 ...
	I0717 01:33:02.845455    3968 cli_runner.go:164] Run: docker container inspect multinode-861100 --format={{.State.Status}}
	I0717 01:33:03.040527    3968 status.go:330] multinode-861100 host status = "Running" (err=<nil>)
	I0717 01:33:03.040660    3968 host.go:66] Checking if "multinode-861100" exists ...
	I0717 01:33:03.051156    3968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-861100
	I0717 01:33:03.242529    3968 host.go:66] Checking if "multinode-861100" exists ...
	I0717 01:33:03.255716    3968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 01:33:03.264718    3968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-861100
	I0717 01:33:03.446087    3968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64989 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-861100\id_rsa Username:docker}
	I0717 01:33:03.599417    3968 ssh_runner.go:195] Run: systemctl --version
	I0717 01:33:03.626129    3968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:33:03.663970    3968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-861100
	I0717 01:33:03.850790    3968 kubeconfig.go:125] found "multinode-861100" server: "https://127.0.0.1:64988"
	I0717 01:33:03.850790    3968 api_server.go:166] Checking apiserver status ...
	I0717 01:33:03.864687    3968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 01:33:03.906111    3968 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2617/cgroup
	I0717 01:33:03.933013    3968 api_server.go:182] apiserver freezer: "7:freezer:/docker/14cf43c6742c2c03dbace69876d42202a6c05d7f7e9c14b79074c4a77d21d96c/kubepods/burstable/pod377cc00cafd25eb962d084209cd2b9f6/5dde0dc2a8da0e3171a5018deea52ee879775421958872c49e28775e6c3ceb03"
	I0717 01:33:03.946536    3968 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/14cf43c6742c2c03dbace69876d42202a6c05d7f7e9c14b79074c4a77d21d96c/kubepods/burstable/pod377cc00cafd25eb962d084209cd2b9f6/5dde0dc2a8da0e3171a5018deea52ee879775421958872c49e28775e6c3ceb03/freezer.state
	I0717 01:33:03.967859    3968 api_server.go:204] freezer state: "THAWED"
	I0717 01:33:03.967859    3968 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:64988/healthz ...
	I0717 01:33:03.980251    3968 api_server.go:279] https://127.0.0.1:64988/healthz returned 200:
	ok
	I0717 01:33:03.980291    3968 status.go:422] multinode-861100 apiserver status = Running (err=<nil>)
	I0717 01:33:03.980291    3968 status.go:257] multinode-861100 status: &{Name:multinode-861100 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 01:33:03.980351    3968 status.go:255] checking status of multinode-861100-m02 ...
	I0717 01:33:03.998738    3968 cli_runner.go:164] Run: docker container inspect multinode-861100-m02 --format={{.State.Status}}
	I0717 01:33:04.182339    3968 status.go:330] multinode-861100-m02 host status = "Running" (err=<nil>)
	I0717 01:33:04.182374    3968 host.go:66] Checking if "multinode-861100-m02" exists ...
	I0717 01:33:04.194689    3968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-861100-m02
	I0717 01:33:04.367576    3968 host.go:66] Checking if "multinode-861100-m02" exists ...
	I0717 01:33:04.385477    3968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 01:33:04.396767    3968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-861100-m02
	I0717 01:33:04.569965    3968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65043 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-861100-m02\id_rsa Username:docker}
	I0717 01:33:04.716436    3968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 01:33:04.742100    3968 status.go:257] multinode-861100-m02 status: &{Name:multinode-861100-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0717 01:33:04.742218    3968 status.go:255] checking status of multinode-861100-m03 ...
	I0717 01:33:04.761208    3968 cli_runner.go:164] Run: docker container inspect multinode-861100-m03 --format={{.State.Status}}
	I0717 01:33:04.960754    3968 status.go:330] multinode-861100-m03 host status = "Stopped" (err=<nil>)
	I0717 01:33:04.960863    3968 status.go:343] host is not running, skipping remaining checks
	I0717 01:33:04.960922    3968 status.go:257] multinode-861100-m03 status: &{Name:multinode-861100-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (6.87s)

TestMultiNode/serial/StartAfterStop (27.36s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 node start m03 -v=7 --alsologtostderr: (23.758167s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 status -v=7 --alsologtostderr: (3.1037562s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (27.36s)

TestMultiNode/serial/RestartKeepsNodes (164.95s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-861100
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-861100
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-861100: (26.2158517s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-861100 --wait=true -v=8 --alsologtostderr
E0717 01:36:01.456761    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
multinode_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-861100 --wait=true -v=8 --alsologtostderr: (2m18.2846516s)
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-861100
--- PASS: TestMultiNode/serial/RestartKeepsNodes (164.95s)

TestMultiNode/serial/DeleteNode (13.84s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 node delete m03: (11.2343055s)
multinode_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 status --alsologtostderr
multinode_test.go:422: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 status --alsologtostderr: (2.1604581s)
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (13.84s)

TestMultiNode/serial/StopMultiNode (25.51s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 stop
E0717 01:36:53.110927    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
multinode_test.go:345: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 stop: (24.2845786s)
multinode_test.go:351: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-861100 status: exit status 7 (604.0205ms)

-- stdout --
	multinode-861100
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-861100-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0717 01:36:55.535506    1216 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
multinode_test.go:358: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-861100 status --alsologtostderr: exit status 7 (621.7778ms)

-- stdout --
	multinode-861100
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-861100-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0717 01:36:56.143204    7296 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0717 01:36:56.223737    7296 out.go:291] Setting OutFile to fd 756 ...
	I0717 01:36:56.224550    7296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:36:56.224619    7296 out.go:304] Setting ErrFile to fd 780...
	I0717 01:36:56.224619    7296 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0717 01:36:56.238586    7296 out.go:298] Setting JSON to false
	I0717 01:36:56.238586    7296 mustload.go:65] Loading cluster: multinode-861100
	I0717 01:36:56.239367    7296 notify.go:220] Checking for updates...
	I0717 01:36:56.239674    7296 config.go:182] Loaded profile config "multinode-861100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0717 01:36:56.239674    7296 status.go:255] checking status of multinode-861100 ...
	I0717 01:36:56.262147    7296 cli_runner.go:164] Run: docker container inspect multinode-861100 --format={{.State.Status}}
	I0717 01:36:56.450061    7296 status.go:330] multinode-861100 host status = "Stopped" (err=<nil>)
	I0717 01:36:56.450061    7296 status.go:343] host is not running, skipping remaining checks
	I0717 01:36:56.450061    7296 status.go:257] multinode-861100 status: &{Name:multinode-861100 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 01:36:56.450061    7296 status.go:255] checking status of multinode-861100-m02 ...
	I0717 01:36:56.468961    7296 cli_runner.go:164] Run: docker container inspect multinode-861100-m02 --format={{.State.Status}}
	I0717 01:36:56.636800    7296 status.go:330] multinode-861100-m02 host status = "Stopped" (err=<nil>)
	I0717 01:36:56.636800    7296 status.go:343] host is not running, skipping remaining checks
	I0717 01:36:56.636800    7296 status.go:257] multinode-861100-m02 status: &{Name:multinode-861100-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.51s)

TestMultiNode/serial/RestartMultiNode (80.19s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-861100 --wait=true -v=8 --alsologtostderr --driver=docker
multinode_test.go:376: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-861100 --wait=true -v=8 --alsologtostderr --driver=docker: (1m17.5761492s)
multinode_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-861100 status --alsologtostderr
multinode_test.go:382: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-861100 status --alsologtostderr: (2.1888479s)
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (80.19s)

TestMultiNode/serial/ValidateNameConflict (76.87s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-861100
multinode_test.go:464: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-861100-m02 --driver=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-861100-m02 --driver=docker: exit status 14 (274.9499ms)

-- stdout --
	* [multinode-861100-m02] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19264
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0717 01:38:17.197735    9552 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Profile name 'multinode-861100-m02' is duplicated with machine name 'multinode-861100-m02' in profile 'multinode-861100'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-861100-m03 --driver=docker
multinode_test.go:472: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-861100-m03 --driver=docker: (1m9.4947267s)
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-861100
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-861100: exit status 80 (1.1635012s)

-- stdout --
	* Adding node m03 to cluster multinode-861100 as [worker]
	
	

-- /stdout --
** stderr ** 
	W0717 01:39:26.975511    1948 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-861100-m03 already exists in multinode-861100-m03 profile
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                      │
	│    * If the above advice does not help, please let us know:                                                          │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                        │
	│                                                                                                                      │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                             │
	│    * Please also attach the following file to the GitHub issue:                                                      │
	│    * - C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube_node_2bbdfd0e0a46af455ae5a771b1270736051e61d9_7.log    │
	│                                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-861100-m03
multinode_test.go:484: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-861100-m03: (5.6893338s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (76.87s)

TestPreload (197.57s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-193900 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4
E0717 01:41:01.462690    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
E0717 01:41:53.106644    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-193900 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4: (2m11.7710332s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-193900 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-193900 image pull gcr.io/k8s-minikube/busybox: (2.4035792s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-193900
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-193900: (12.378274s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-193900 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-193900 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker: (44.6595806s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-193900 image list
helpers_test.go:175: Cleaning up "test-preload-193900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-193900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-193900: (5.4468108s)
--- PASS: TestPreload (197.57s)

TestScheduledStopWindows (141.58s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-520200 --memory=2048 --driver=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-520200 --memory=2048 --driver=docker: (1m10.8682541s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-520200 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-520200 --schedule 5m: (1.4004179s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-520200 -n scheduled-stop-520200
scheduled_stop_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-520200 -n scheduled-stop-520200: (1.3951514s)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-520200 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-520200 -- sudo systemctl show minikube-scheduled-stop --no-page: (1.2216071s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-520200 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-520200 --schedule 5s: (1.4484706s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-520200
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-520200: exit status 7 (436.406ms)

-- stdout --
	scheduled-stop-520200
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	W0717 01:45:17.936138    9156 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-520200 -n scheduled-stop-520200
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-520200 -n scheduled-stop-520200: exit status 7 (433.375ms)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0717 01:45:18.379080    1984 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-520200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-520200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-520200: (4.352283s)
--- PASS: TestScheduledStopWindows (141.58s)

TestInsufficientStorage (54.74s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-915700 --memory=2048 --output=json --wait=true --driver=docker
E0717 01:46:01.463845    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-915700 --memory=2048 --output=json --wait=true --driver=docker: exit status 26 (47.6554644s)

-- stdout --
	{"specversion":"1.0","id":"3cc98595-9057-4a10-9652-e2b706fd33fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-915700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8a0896db-1ef1-46c5-bcfa-ade2267988b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube3\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"623ada2a-1884-4e04-8ff0-615cfe5b063d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"cba35386-1aa5-477a-87bd-3fafa9bb4801","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"baffc286-b9f1-4352-a206-99cd6657dc29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19264"}}
	{"specversion":"1.0","id":"bd21adc8-cadd-4ffb-9b79-8e85c1012c94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7d2b3661-8906-4a29-9874-04a755b3d5d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"e61cdb71-f711-4ed2-b067-5115bd77d6c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"971f676b-b17e-4f31-a545-229cc5d6054f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f70f67f4-79f2-4fc8-97a1-bf08575c794b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"1b1b12d4-17a2-415e-ad54-f4bf4aa68533","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-915700\" primary control-plane node in \"insufficient-storage-915700\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1683a21b-e4d3-4e8d-bdf4-9d76718318c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1721146479-19264 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"6682d7fc-be56-4c12-8f38-d8d91713289b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"6e2850fe-eccb-4f16-a6c9-ec5f072455cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
** stderr ** 
	W0717 01:45:23.180978    1896 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-915700 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-915700 --output=json --layout=cluster: exit status 7 (1.2711631s)

-- stdout --
	{"Name":"insufficient-storage-915700","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-915700","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	W0717 01:46:10.833073    8756 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0717 01:46:11.937047    8756 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-915700" does not appear in C:\Users\jenkins.minikube3\minikube-integration\kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-915700 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-915700 --output=json --layout=cluster: exit status 7 (1.2485317s)

-- stdout --
	{"Name":"insufficient-storage-915700","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-915700","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	W0717 01:46:12.110180    2080 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0717 01:46:13.187332    2080 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-915700" does not appear in C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	E0717 01:46:13.225422    2080 status.go:560] unable to read event log: stat: CreateFile C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\insufficient-storage-915700\events.json: The system cannot find the file specified.

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-915700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-915700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-915700: (4.5534739s)
--- PASS: TestInsufficientStorage (54.74s)

TestRunningBinaryUpgrade (423.31s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.26.0.446555945.exe start -p running-upgrade-299500 --memory=2200 --vm-driver=docker
E0717 01:46:36.302431    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
E0717 01:46:53.104977    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
E0717 01:47:24.679403    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.26.0.446555945.exe start -p running-upgrade-299500 --memory=2200 --vm-driver=docker: (4m40.4339451s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-299500 --memory=2200 --alsologtostderr -v=1 --driver=docker
E0717 01:51:01.461536    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-299500 --memory=2200 --alsologtostderr -v=1 --driver=docker: (2m13.468433s)
helpers_test.go:175: Cleaning up "running-upgrade-299500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-299500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-299500: (8.0330489s)
--- PASS: TestRunningBinaryUpgrade (423.31s)

TestKubernetesUpgrade (658.4s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-658500 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker
E0717 01:51:53.117093    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-658500 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker: (3m30.0013213s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-658500
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-658500: (5.2483453s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-658500 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-658500 status --format={{.Host}}: exit status 7 (523.9928ms)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0717 01:54:53.889227    4812 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-658500 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-658500 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker: (6m10.3027678s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-658500 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-658500 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-658500 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker: exit status 106 (489.4434ms)

-- stdout --
	* [kubernetes-upgrade-658500] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19264
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0717 02:01:05.070819   13404 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-658500
	    minikube start -p kubernetes-upgrade-658500 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6585002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-658500 --kubernetes-version=v1.31.0-beta.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-658500 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-658500 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker: (1m2.0705081s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-658500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-658500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-658500: (9.4450453s)
--- PASS: TestKubernetesUpgrade (658.40s)

TestMissingContainerUpgrade (513.85s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.26.0.3458965900.exe start -p missing-upgrade-222100 --memory=2200 --driver=docker
version_upgrade_test.go:309: (dbg) Done: C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.26.0.3458965900.exe start -p missing-upgrade-222100 --memory=2200 --driver=docker: (4m23.7414665s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-222100
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-222100: (12.705757s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-222100
version_upgrade_test.go:323: (dbg) Done: docker rm missing-upgrade-222100: (2.8614215s)
version_upgrade_test.go:329: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-222100 --memory=2200 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-222100 --memory=2200 --alsologtostderr -v=1 --driver=docker: (3m46.1947284s)
helpers_test.go:175: Cleaning up "missing-upgrade-222100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-222100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-222100: (7.1703662s)
--- PASS: TestMissingContainerUpgrade (513.85s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.34s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-222100 --no-kubernetes --kubernetes-version=1.20 --driver=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-222100 --no-kubernetes --kubernetes-version=1.20 --driver=docker: exit status 14 (342.0559ms)

-- stdout --
	* [NoKubernetes-222100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19264
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0717 01:46:17.929393    2576 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.34s)

TestNoKubernetes/serial/StartWithK8s (145.89s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-222100 --driver=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-222100 --driver=docker: (2m24.4051032s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-222100 status -o json
no_kubernetes_test.go:200: (dbg) Done: out/minikube-windows-amd64.exe -p NoKubernetes-222100 status -o json: (1.4805416s)
--- PASS: TestNoKubernetes/serial/StartWithK8s (145.89s)

TestNoKubernetes/serial/StartWithStopK8s (66.64s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-222100 --no-kubernetes --driver=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-222100 --no-kubernetes --driver=docker: (57.4079523s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-222100 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p NoKubernetes-222100 status -o json: exit status 2 (1.5365491s)

-- stdout --
	{"Name":"NoKubernetes-222100","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
** stderr ** 
	W0717 01:49:41.594150    8324 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-windows-amd64.exe delete -p NoKubernetes-222100
no_kubernetes_test.go:124: (dbg) Done: out/minikube-windows-amd64.exe delete -p NoKubernetes-222100: (7.6925758s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (66.64s)

TestNoKubernetes/serial/Start (33.09s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-222100 --no-kubernetes --driver=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-222100 --no-kubernetes --driver=docker: (33.0900035s)
--- PASS: TestNoKubernetes/serial/Start (33.09s)

TestNoKubernetes/serial/VerifyK8sNotRunning (1.67s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-222100 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-222100 "sudo systemctl is-active --quiet service kubelet": exit status 1 (1.6671025s)

** stderr ** 
	W0717 01:50:23.905093    4320 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (1.67s)

TestNoKubernetes/serial/ProfileList (22.45s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-windows-amd64.exe profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-windows-amd64.exe profile list: (16.6931777s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe profile list --output=json: (5.757625s)
--- PASS: TestNoKubernetes/serial/ProfileList (22.45s)

TestPause/serial/Start (168.19s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-653200 --memory=2048 --install-addons=false --wait=all --driver=docker
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-653200 --memory=2048 --install-addons=false --wait=all --driver=docker: (2m48.1869748s)
--- PASS: TestPause/serial/Start (168.19s)

TestNoKubernetes/serial/Stop (4.34s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-windows-amd64.exe stop -p NoKubernetes-222100
no_kubernetes_test.go:158: (dbg) Done: out/minikube-windows-amd64.exe stop -p NoKubernetes-222100: (4.3374919s)
--- PASS: TestNoKubernetes/serial/Stop (4.34s)

TestNoKubernetes/serial/StartNoArgs (19.34s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-222100 --driver=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-222100 --driver=docker: (19.3404273s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (19.34s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (1.38s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-222100 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-222100 "sudo systemctl is-active --quiet service kubelet": exit status 1 (1.3776927s)

** stderr ** 
	W0717 01:51:11.690524    8588 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (1.38s)

TestPause/serial/SecondStartNoReconfiguration (86.78s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-653200 --alsologtostderr -v=1 --driver=docker
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-653200 --alsologtostderr -v=1 --driver=docker: (1m26.7623065s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (86.78s)

TestPause/serial/Pause (2.1s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-653200 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-653200 --alsologtostderr -v=5: (2.1005285s)
--- PASS: TestPause/serial/Pause (2.10s)

TestPause/serial/VerifyStatus (1.72s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-653200 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-653200 --output=json --layout=cluster: exit status 2 (1.7225931s)

-- stdout --
	{"Name":"pause-653200","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-653200","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	W0717 01:54:45.340634    3364 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestPause/serial/VerifyStatus (1.72s)

TestPause/serial/Unpause (2.18s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-653200 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-653200 --alsologtostderr -v=5: (2.1813131s)
--- PASS: TestPause/serial/Unpause (2.18s)

TestPause/serial/PauseAgain (2.35s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-653200 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-653200 --alsologtostderr -v=5: (2.3514067s)
--- PASS: TestPause/serial/PauseAgain (2.35s)

TestPause/serial/DeletePaused (6.79s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-653200 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-653200 --alsologtostderr -v=5: (6.7857228s)
--- PASS: TestPause/serial/DeletePaused (6.79s)

TestPause/serial/VerifyDeletedResources (3.4s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.7799768s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-653200
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-653200: exit status 1 (174.9802ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-653200: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (3.40s)

TestStoppedBinaryUpgrade/Setup (1.64s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.64s)

TestStoppedBinaryUpgrade/Upgrade (212.81s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.26.0.1559248801.exe start -p stopped-upgrade-242300 --memory=2200 --vm-driver=docker
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.26.0.1559248801.exe start -p stopped-upgrade-242300 --memory=2200 --vm-driver=docker: (1m32.3468496s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.26.0.1559248801.exe -p stopped-upgrade-242300 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.26.0.1559248801.exe -p stopped-upgrade-242300 stop: (13.7049522s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-242300 --memory=2200 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-242300 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m46.7489516s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (212.81s)

TestStartStop/group/old-k8s-version/serial/FirstStart (297.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-556100 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-556100 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0: (4m57.3255628s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (297.33s)

TestStoppedBinaryUpgrade/MinikubeLogs (5.63s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-242300
E0717 02:01:01.474090    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-242300: (5.6347258s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (5.63s)

TestStartStop/group/embed-certs/serial/FirstStart (144.33s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-395100 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-395100 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.30.2: (2m24.3339661s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (144.33s)

TestStartStop/group/no-preload/serial/FirstStart (160.5s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-096400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.31.0-beta.0
E0717 02:01:53.109666    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-096400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.31.0-beta.0: (2m40.5031287s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (160.50s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (139.79s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-970500 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.30.2
E0717 02:03:16.313304    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-970500 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.30.2: (2m19.7858733s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (139.79s)

TestStartStop/group/embed-certs/serial/DeployApp (10.82s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-395100 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6f7e294f-a0ad-4d31-b968-69780939dca7] Pending
helpers_test.go:344: "busybox" [6f7e294f-a0ad-4d31-b968-69780939dca7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6f7e294f-a0ad-4d31-b968-69780939dca7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.0219007s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-395100 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.82s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.15s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-395100 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0717 02:04:04.701660    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-395100 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.764921s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-395100 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.15s)

TestStartStop/group/embed-certs/serial/Stop (13.35s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-395100 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-395100 --alsologtostderr -v=3: (13.347932s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.35s)

TestStartStop/group/no-preload/serial/DeployApp (9.83s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-096400 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2c06460f-e0a1-4402-a8d3-dca6de4277e3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2c06460f-e0a1-4402-a8d3-dca6de4277e3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.0225677s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-096400 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.83s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-556100 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0f18a929-2e10-4625-976f-36010555ffe9] Pending
helpers_test.go:344: "busybox" [0f18a929-2e10-4625-976f-36010555ffe9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0f18a929-2e10-4625-976f-36010555ffe9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.0252335s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-556100 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.25s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (1.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-395100 -n embed-certs-395100
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-395100 -n embed-certs-395100: exit status 7 (482.674ms)

-- stdout --
	Stopped
-- /stdout --
** stderr ** 
	W0717 02:04:20.685975    8684 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-395100 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (1.24s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.05s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-096400 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-096400 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.6895739s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-096400 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.05s)

TestStartStop/group/embed-certs/serial/SecondStart (303.97s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-395100 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.30.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-395100 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.30.2: (5m2.3787601s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-395100 -n embed-certs-395100
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-395100 -n embed-certs-395100: (1.5899767s)
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (303.97s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-556100 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-556100 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.027244s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-556100 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.40s)

TestStartStop/group/no-preload/serial/Stop (14.95s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-096400 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-096400 --alsologtostderr -v=3: (14.9464878s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (14.95s)

TestStartStop/group/old-k8s-version/serial/Stop (14.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-556100 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-556100 --alsologtostderr -v=3: (14.04083s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (14.04s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-970500 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c473e2ff-246a-444d-86f3-3d7658b400b9] Pending
helpers_test.go:344: "busybox" [c473e2ff-246a-444d-86f3-3d7658b400b9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c473e2ff-246a-444d-86f3-3d7658b400b9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.0095191s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-970500 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.98s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (1.32s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-096400 -n no-preload-096400
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-096400 -n no-preload-096400: exit status 7 (490.5603ms)

-- stdout --
	Stopped
-- /stdout --
** stderr ** 
	W0717 02:04:39.385802   10104 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-096400 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (1.32s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (1.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-556100 -n old-k8s-version-556100
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-556100 -n old-k8s-version-556100: exit status 7 (509.3325ms)

-- stdout --
	Stopped
-- /stdout --
** stderr ** 
	W0717 02:04:40.504150    3080 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-556100 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (1.37s)

TestStartStop/group/no-preload/serial/SecondStart (320.31s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-096400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-096400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.31.0-beta.0: (5m18.4641305s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-096400 -n no-preload-096400
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-096400 -n no-preload-096400: (1.8427083s)
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (320.31s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.5s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-970500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-970500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.0926604s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-970500 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.50s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (14.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-970500 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-970500 --alsologtostderr -v=3: (14.4349486s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (14.43s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (1.75s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-970500 -n default-k8s-diff-port-970500
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-970500 -n default-k8s-diff-port-970500: exit status 7 (677.7495ms)

-- stdout --
	Stopped
-- /stdout --
** stderr ** 
	W0717 02:05:06.798710   14672 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-diff-port-970500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-diff-port-970500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: (1.074554s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (1.75s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (334.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-970500 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.30.2
E0717 02:06:01.468566    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
E0717 02:06:53.120771    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-970500 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.30.2: (5m32.7437858s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-970500 -n default-k8s-diff-port-970500
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-970500 -n default-k8s-diff-port-970500: (1.4762461s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (334.22s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-zlp2g" [4912f0ec-a344-4514-a9bb-1aba173d2218] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0220707s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.42s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-zlp2g" [4912f0ec-a344-4514-a9bb-1aba173d2218] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0224663s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-395100 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.42s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.94s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p embed-certs-395100 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.94s)

TestStartStop/group/embed-certs/serial/Pause (10.9s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-395100 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-395100 --alsologtostderr -v=1: (1.9339589s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-395100 -n embed-certs-395100
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-395100 -n embed-certs-395100: exit status 2 (1.4851243s)

-- stdout --
	Paused

-- /stdout --
** stderr ** 
	W0717 02:09:40.227889    6828 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-395100 -n embed-certs-395100
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-395100 -n embed-certs-395100: exit status 2 (1.570901s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0717 02:09:41.709788    2136 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-395100 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-395100 --alsologtostderr -v=1: (2.0981905s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-395100 -n embed-certs-395100
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-395100 -n embed-certs-395100: (1.8467311s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-395100 -n embed-certs-395100
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-395100 -n embed-certs-395100: (1.9599136s)
--- PASS: TestStartStop/group/embed-certs/serial/Pause (10.90s)

TestStartStop/group/newest-cni/serial/FirstStart (139.82s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-861300 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-861300 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.31.0-beta.0: (2m19.8199823s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (139.82s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-h2hqq" [1db4fb5c-c5b1-4729-8ca1-8c76c24de58c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0155184s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.47s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-h2hqq" [1db4fb5c-c5b1-4729-8ca1-8c76c24de58c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0155338s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-096400 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.47s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (1.06s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-096400 image list --format=json
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-096400 image list --format=json: (1.0579445s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (1.06s)

TestStartStop/group/no-preload/serial/Pause (11.31s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-096400 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p no-preload-096400 --alsologtostderr -v=1: (2.1103086s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-096400 -n no-preload-096400
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-096400 -n no-preload-096400: exit status 2 (1.5707793s)

-- stdout --
	Paused

-- /stdout --
** stderr ** 
	W0717 02:10:15.661046    5608 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-096400 -n no-preload-096400
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-096400 -n no-preload-096400: exit status 2 (1.5349108s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0717 02:10:17.214118   15140 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p no-preload-096400 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p no-preload-096400 --alsologtostderr -v=1: (2.239284s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-096400 -n no-preload-096400
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-096400 -n no-preload-096400: (1.9977381s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-096400 -n no-preload-096400
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-096400 -n no-preload-096400: (1.856625s)
--- PASS: TestStartStop/group/no-preload/serial/Pause (11.31s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-rwfk6" [c9ba7589-efaf-46c1-be9b-29cdf63eb347] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.021439s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.63s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-rwfk6" [c9ba7589-efaf-46c1-be9b-29cdf63eb347] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0213776s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-970500 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.63s)

TestNetworkPlugins/group/auto/Start (153.13s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-901900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-901900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker: (2m33.1303279s)
--- PASS: TestNetworkPlugins/group/auto/Start (153.13s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (1.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p default-k8s-diff-port-970500 image list --format=json
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe -p default-k8s-diff-port-970500 image list --format=json: (1.1661912s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (1.17s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (12.79s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-970500 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-970500 --alsologtostderr -v=1: (2.2506295s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-970500 -n default-k8s-diff-port-970500
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-970500 -n default-k8s-diff-port-970500: exit status 2 (1.6782662s)

-- stdout --
	Paused

-- /stdout --
** stderr ** 
	W0717 02:10:59.796269   11372 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-970500 -n default-k8s-diff-port-970500
E0717 02:11:01.480122    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-970500 -n default-k8s-diff-port-970500: exit status 2 (1.8685046s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0717 02:11:01.503381    7108 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-970500 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-970500 --alsologtostderr -v=1: (2.2060282s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-970500 -n default-k8s-diff-port-970500
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-970500 -n default-k8s-diff-port-970500: (3.0631221s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-970500 -n default-k8s-diff-port-970500
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-970500 -n default-k8s-diff-port-970500: (1.7198697s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (12.79s)

TestNetworkPlugins/group/kindnet/Start (160.64s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-901900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker
E0717 02:11:53.123468    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kindnet-901900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker: (2m40.6412268s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (160.64s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-4c4d4" [ca2146d0-5008-4b1b-a2a6-c9f6daa9962d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0211274s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-4c4d4" [ca2146d0-5008-4b1b-a2a6-c9f6daa9962d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0264906s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-556100 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.73s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (4.83s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-861300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-861300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (4.833402s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (4.83s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (1.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-556100 image list --format=json
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe -p old-k8s-version-556100 image list --format=json: (1.1946757s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (1.20s)

TestStartStop/group/old-k8s-version/serial/Pause (13.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-556100 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p old-k8s-version-556100 --alsologtostderr -v=1: (2.4962894s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-556100 -n old-k8s-version-556100
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-556100 -n old-k8s-version-556100: exit status 2 (2.0372164s)

-- stdout --
	Paused

-- /stdout --
** stderr ** 
	W0717 02:12:23.878508   15340 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-556100 -n old-k8s-version-556100
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-556100 -n old-k8s-version-556100: exit status 2 (1.8770065s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0717 02:12:25.911475    8432 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p old-k8s-version-556100 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p old-k8s-version-556100 --alsologtostderr -v=1: (2.1733926s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-556100 -n old-k8s-version-556100
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-556100 -n old-k8s-version-556100: (2.6552254s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-556100 -n old-k8s-version-556100
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-556100 -n old-k8s-version-556100: (2.0250334s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (13.27s)

TestStartStop/group/newest-cni/serial/Stop (8.94s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-861300 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-861300 --alsologtostderr -v=3: (8.9368749s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.94s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (1.59s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-861300 -n newest-cni-861300
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-861300 -n newest-cni-861300: exit status 7 (663.3248ms)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0717 02:12:31.476679    5528 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-861300 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (1.59s)

TestStartStop/group/newest-cni/serial/SecondStart (60.1s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-861300 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-861300 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.31.0-beta.0: (57.8410027s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-861300 -n newest-cni-861300
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-861300 -n newest-cni-861300: (2.2607971s)
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (60.10s)

TestNetworkPlugins/group/calico/Start (203.86s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-901900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p calico-901900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker: (3m23.8595944s)
--- PASS: TestNetworkPlugins/group/calico/Start (203.86s)

TestNetworkPlugins/group/auto/KubeletFlags (1.56s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-901900 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p auto-901900 "pgrep -a kubelet": (1.5565622s)
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (1.56s)

TestNetworkPlugins/group/auto/NetCatPod (23.51s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-901900 replace --force -f testdata\netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context auto-901900 replace --force -f testdata\netcat-deployment.yaml: (1.2238937s)
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-x6svw" [497621cf-e937-4892-8dcf-79ca3f6d0162] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-x6svw" [497621cf-e937-4892-8dcf-79ca3f6d0162] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 22.0085292s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (23.51s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (1.15s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-861300 image list --format=json
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-861300 image list --format=json: (1.1496938s)
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (1.15s)

TestStartStop/group/newest-cni/serial/Pause (11.85s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-861300 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p newest-cni-861300 --alsologtostderr -v=1: (2.3087539s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-861300 -n newest-cni-861300
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-861300 -n newest-cni-861300: exit status 2 (1.7194081s)
-- stdout --
	Paused
-- /stdout --
** stderr ** 
	W0717 02:13:36.624746   14092 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-861300 -n newest-cni-861300
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-861300 -n newest-cni-861300: exit status 2 (1.8407248s)
-- stdout --
	Stopped
-- /stdout --
** stderr ** 
	W0717 02:13:38.357062    3832 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p newest-cni-861300 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p newest-cni-861300 --alsologtostderr -v=1: (2.1986885s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-861300 -n newest-cni-861300
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-861300 -n newest-cni-861300: (2.286914s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-861300 -n newest-cni-861300
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-861300 -n newest-cni-861300: (1.4898071s)
--- PASS: TestStartStop/group/newest-cni/serial/Pause (11.85s)
E0717 02:20:32.267445    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kindnet-901900\client.crt: The system cannot find the path specified.
E0717 02:20:44.712161    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.

TestNetworkPlugins/group/auto/DNS (0.38s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-901900 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.38s)

TestNetworkPlugins/group/auto/Localhost (0.51s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-901900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.51s)

TestNetworkPlugins/group/auto/HairPin (0.35s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-901900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.35s)

TestNetworkPlugins/group/custom-flannel/Start (181.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-flannel-901900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-flannel-901900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker: (3m1.2688753s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (181.27s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-w4wzh" [ce0302cf-28f5-4eda-a1a1-73780996e2f0] Running
E0717 02:14:11.860276    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\no-preload-096400\client.crt: The system cannot find the path specified.
E0717 02:14:11.876276    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\no-preload-096400\client.crt: The system cannot find the path specified.
E0717 02:14:11.892275    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\no-preload-096400\client.crt: The system cannot find the path specified.
E0717 02:14:11.922927    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\no-preload-096400\client.crt: The system cannot find the path specified.
E0717 02:14:11.970165    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\no-preload-096400\client.crt: The system cannot find the path specified.
E0717 02:14:12.057195    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\no-preload-096400\client.crt: The system cannot find the path specified.
E0717 02:14:12.218947    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\no-preload-096400\client.crt: The system cannot find the path specified.
E0717 02:14:12.469285    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-556100\client.crt: The system cannot find the path specified.
E0717 02:14:12.484693    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-556100\client.crt: The system cannot find the path specified.
E0717 02:14:12.500711    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-556100\client.crt: The system cannot find the path specified.
E0717 02:14:12.532678    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-556100\client.crt: The system cannot find the path specified.
E0717 02:14:12.549208    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\no-preload-096400\client.crt: The system cannot find the path specified.
E0717 02:14:12.580697    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-556100\client.crt: The system cannot find the path specified.
E0717 02:14:12.675229    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-556100\client.crt: The system cannot find the path specified.
E0717 02:14:12.848435    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-556100\client.crt: The system cannot find the path specified.
E0717 02:14:13.179266    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-556100\client.crt: The system cannot find the path specified.
E0717 02:14:13.195208    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\no-preload-096400\client.crt: The system cannot find the path specified.
E0717 02:14:13.827031    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-556100\client.crt: The system cannot find the path specified.
E0717 02:14:14.489275    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\no-preload-096400\client.crt: The system cannot find the path specified.
E0717 02:14:15.117615    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-556100\client.crt: The system cannot find the path specified.
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.0167178s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (1.42s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kindnet-901900 "pgrep -a kubelet"
E0717 02:14:17.051887    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\no-preload-096400\client.crt: The system cannot find the path specified.
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kindnet-901900 "pgrep -a kubelet": (1.4170892s)
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (1.42s)

TestNetworkPlugins/group/kindnet/NetCatPod (41.73s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-901900 replace --force -f testdata\netcat-deployment.yaml
E0717 02:14:17.691823    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-556100\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-gmdwf" [4d391da6-e338-4424-987a-34e2ca438cfd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0717 02:14:22.176378    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\no-preload-096400\client.crt: The system cannot find the path specified.
E0717 02:14:22.822776    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-556100\client.crt: The system cannot find the path specified.
E0717 02:14:32.429503    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\no-preload-096400\client.crt: The system cannot find the path specified.
E0717 02:14:33.077452    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-556100\client.crt: The system cannot find the path specified.
E0717 02:14:37.319232    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\default-k8s-diff-port-970500\client.crt: The system cannot find the path specified.
E0717 02:14:37.333939    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\default-k8s-diff-port-970500\client.crt: The system cannot find the path specified.
E0717 02:14:37.349001    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\default-k8s-diff-port-970500\client.crt: The system cannot find the path specified.
E0717 02:14:37.380196    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\default-k8s-diff-port-970500\client.crt: The system cannot find the path specified.
E0717 02:14:37.425809    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\default-k8s-diff-port-970500\client.crt: The system cannot find the path specified.
E0717 02:14:37.518002    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\default-k8s-diff-port-970500\client.crt: The system cannot find the path specified.
E0717 02:14:37.688605    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\default-k8s-diff-port-970500\client.crt: The system cannot find the path specified.
E0717 02:14:38.015742    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\default-k8s-diff-port-970500\client.crt: The system cannot find the path specified.
E0717 02:14:38.668187    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\default-k8s-diff-port-970500\client.crt: The system cannot find the path specified.
E0717 02:14:39.948587    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\default-k8s-diff-port-970500\client.crt: The system cannot find the path specified.
E0717 02:14:42.509791    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\default-k8s-diff-port-970500\client.crt: The system cannot find the path specified.
E0717 02:14:47.637804    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\default-k8s-diff-port-970500\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-6bc787d567-gmdwf" [4d391da6-e338-4424-987a-34e2ca438cfd] Running
E0717 02:14:53.562192    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-556100\client.crt: The system cannot find the path specified.
E0717 02:14:57.892635    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\default-k8s-diff-port-970500\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 41.0152456s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (41.73s)

TestNetworkPlugins/group/kindnet/DNS (0.38s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-901900 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.38s)

TestNetworkPlugins/group/kindnet/Localhost (1.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-901900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
net_test.go:194: (dbg) Done: kubectl --context kindnet-901900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080": (1.1337353s)
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (1.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.37s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-901900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.37s)

TestNetworkPlugins/group/false/Start (115.67s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-901900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker
E0717 02:15:18.379657    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\default-k8s-diff-port-970500\client.crt: The system cannot find the path specified.
E0717 02:15:34.658367    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\no-preload-096400\client.crt: The system cannot find the path specified.
E0717 02:15:34.658367    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-556100\client.crt: The system cannot find the path specified.
E0717 02:15:59.346248    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\default-k8s-diff-port-970500\client.crt: The system cannot find the path specified.
E0717 02:16:01.478498    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p false-901900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker: (1m55.6656469s)
--- PASS: TestNetworkPlugins/group/false/Start (115.67s)

TestNetworkPlugins/group/calico/ControllerPod (6.04s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-gmhm4" [f16dfc37-b873-4c2b-8ddc-4336ad71b862] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.0365916s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.04s)

TestNetworkPlugins/group/calico/KubeletFlags (1.55s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p calico-901900 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p calico-901900 "pgrep -a kubelet": (1.5448653s)
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (1.55s)

TestNetworkPlugins/group/enable-default-cni/Start (128.8s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-901900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-901900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker: (2m8.7953166s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (128.80s)

TestNetworkPlugins/group/calico/NetCatPod (40.8s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-901900 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-ktwwd" [62a49848-a40e-4225-b897-7cc554c396a9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-ktwwd" [62a49848-a40e-4225-b897-7cc554c396a9] Running
E0717 02:16:53.130864    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\functional-965000\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 40.0167828s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (40.80s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (1.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p custom-flannel-901900 "pgrep -a kubelet"
E0717 02:16:56.581603    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\no-preload-096400\client.crt: The system cannot find the path specified.
E0717 02:16:56.596769    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\old-k8s-version-556100\client.crt: The system cannot find the path specified.
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p custom-flannel-901900 "pgrep -a kubelet": (1.3311808s)
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (1.33s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (18.8s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-901900 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-4xtw8" [2f94a5bc-ef56-462e-9d55-bf675cc87bfd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-4xtw8" [2f94a5bc-ef56-462e-9d55-bf675cc87bfd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 18.0212456s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (18.80s)

TestNetworkPlugins/group/false/KubeletFlags (1.48s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-901900 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p false-901900 "pgrep -a kubelet": (1.4825019s)
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (1.48s)

TestNetworkPlugins/group/calico/DNS (0.44s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-901900 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.44s)

TestNetworkPlugins/group/calico/Localhost (0.34s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-901900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.34s)

TestNetworkPlugins/group/false/NetCatPod (20.74s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-901900 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-rmqfl" [f6f3e00e-875a-46ec-a0d6-108322229ba8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-rmqfl" [f6f3e00e-875a-46ec-a0d6-108322229ba8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 20.0263165s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (20.74s)

TestNetworkPlugins/group/calico/HairPin (0.42s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-901900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.42s)

TestNetworkPlugins/group/custom-flannel/DNS (0.39s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-901900 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.39s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-901900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.35s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-901900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.34s)

TestNetworkPlugins/group/false/DNS (0.4s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-901900 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.40s)

TestNetworkPlugins/group/false/Localhost (0.34s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-901900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.34s)

TestNetworkPlugins/group/false/HairPin (0.34s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-901900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.34s)

TestNetworkPlugins/group/flannel/Start (159.65s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p flannel-901900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p flannel-901900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker: (2m39.6485571s)
--- PASS: TestNetworkPlugins/group/flannel/Start (159.65s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (1.48s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-901900 "pgrep -a kubelet"
E0717 02:18:25.726337    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-901900\client.crt: The system cannot find the path specified.
E0717 02:18:25.742316    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-901900\client.crt: The system cannot find the path specified.
E0717 02:18:25.757312    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-901900\client.crt: The system cannot find the path specified.
E0717 02:18:25.789845    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-901900\client.crt: The system cannot find the path specified.
E0717 02:18:25.836896    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-901900\client.crt: The system cannot find the path specified.
E0717 02:18:25.931434    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-901900\client.crt: The system cannot find the path specified.
E0717 02:18:26.093902    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-901900\client.crt: The system cannot find the path specified.
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p enable-default-cni-901900 "pgrep -a kubelet": (1.4819134s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (1.48s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (42.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-901900 replace --force -f testdata\netcat-deployment.yaml
E0717 02:18:26.423940    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-901900\client.crt: The system cannot find the path specified.
E0717 02:18:27.068438    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-901900\client.crt: The system cannot find the path specified.
E0717 02:18:28.359460    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-901900\client.crt: The system cannot find the path specified.
net_test.go:149: (dbg) Done: kubectl --context enable-default-cni-901900 replace --force -f testdata\netcat-deployment.yaml: (4.6435337s)
E0717 02:18:30.926974    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-901900\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-85w47" [7bc193c8-1806-4e51-9593-9a3b90bae20a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0717 02:18:36.048086    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-901900\client.crt: The system cannot find the path specified.
E0717 02:18:46.289033    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-901900\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-6bc787d567-85w47" [7bc193c8-1806-4e51-9593-9a3b90bae20a] Running
E0717 02:19:06.777162    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\auto-901900\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 36.1040062s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (42.30s)

TestNetworkPlugins/group/bridge/Start (117.41s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-901900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-901900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker: (1m57.4050338s)
--- PASS: TestNetworkPlugins/group/bridge/Start (117.41s)

TestNetworkPlugins/group/kubenet/Start (144.37s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-901900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-901900 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker: (2m24.3694236s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (144.37s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.59s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-901900 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.59s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-901900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.33s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-901900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.32s)

TestNetworkPlugins/group/bridge/KubeletFlags (1.28s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-901900 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p bridge-901900 "pgrep -a kubelet": (1.2752845s)
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (1.28s)

TestNetworkPlugins/group/bridge/NetCatPod (17.62s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-901900 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-99x9g" [38af1609-3176-4caf-bccf-e50f12a0f513] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-99x9g" [38af1609-3176-4caf-bccf-e50f12a0f513] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 17.0129751s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (17.62s)

TestNetworkPlugins/group/flannel/ControllerPod (6.03s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-6dzvp" [477ced0e-6dad-42b3-b316-7bffe6ef2ea8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.0211284s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.03s)

TestNetworkPlugins/group/flannel/KubeletFlags (1.45s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p flannel-901900 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p flannel-901900 "pgrep -a kubelet": (1.4505965s)
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (1.45s)

TestNetworkPlugins/group/flannel/NetCatPod (18.63s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-901900 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-7r9zt" [f67f0397-6ae8-4547-b942-55af577771e0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0717 02:21:01.485952    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\addons-285600\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-6bc787d567-7r9zt" [f67f0397-6ae8-4547-b942-55af577771e0] Running
E0717 02:21:12.277084    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\calico-901900\client.crt: The system cannot find the path specified.
E0717 02:21:14.851644    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\calico-901900\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 18.0252103s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (18.63s)

TestNetworkPlugins/group/bridge/DNS (0.36s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-901900 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.36s)

TestNetworkPlugins/group/bridge/Localhost (0.35s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-901900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.35s)

TestNetworkPlugins/group/bridge/HairPin (0.33s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-901900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.33s)

TestNetworkPlugins/group/kubenet/KubeletFlags (1.34s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-901900 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kubenet-901900 "pgrep -a kubelet": (1.3406213s)
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (1.34s)

TestNetworkPlugins/group/flannel/DNS (0.39s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-901900 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.39s)

TestNetworkPlugins/group/flannel/Localhost (0.34s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-901900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.34s)

TestNetworkPlugins/group/flannel/HairPin (0.36s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-901900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.36s)

TestNetworkPlugins/group/kubenet/NetCatPod (19.67s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-901900 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-4vg9p" [fceaa9e8-30ce-412c-947c-568edcb3e23b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0717 02:21:19.981442    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\calico-901900\client.crt: The system cannot find the path specified.
E0717 02:21:30.227909    7712 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\calico-901900\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-6bc787d567-4vg9p" [fceaa9e8-30ce-412c-947c-568edcb3e23b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 19.0089624s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (19.67s)

TestNetworkPlugins/group/kubenet/DNS (0.4s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-901900 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.40s)

TestNetworkPlugins/group/kubenet/Localhost (0.35s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-901900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.35s)

TestNetworkPlugins/group/kubenet/HairPin (0.39s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-901900 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.39s)

Test skip (26/348)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.2/cached-images (0.00s)

TestDownloadOnly/v1.30.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.2/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestAddons/parallel/Registry (28.46s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 62.3531ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-cmvrv" [bde7187f-101b-47a5-8f05-14625aa13089] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.02803s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-zgk9n" [0cd181eb-4574-4249-b6f0-a750246d67bc] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.0410466s
addons_test.go:342: (dbg) Run:  kubectl --context addons-285600 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-285600 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-285600 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (17.0760047s)
addons_test.go:357: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (28.46s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.02s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-965000 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-965000 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 13864: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.02s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:64: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/ServiceCmdConnect (10.87s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-965000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-965000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-d2q8g" [8c8a4708-046c-4681-8294-fc8292d65a7f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-d2q8g" [8c8a4708-046c-4681-8294-fc8292d65a7f] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.0133294s
functional_test.go:1642: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (10.87s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (1.8s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-546300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-546300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p disable-driver-mounts-546300: (1.7962765s)
--- SKIP: TestStartStop/group/disable-driver-mounts (1.80s)

TestNetworkPlugins/group/cilium (19.02s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-901900 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-901900

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-901900

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-901900

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-901900

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-901900

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-901900

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-901900

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-901900

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-901900

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-901900

>>> host: /etc/nsswitch.conf:
W0717 01:54:53.469717    4652 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-901900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901900"

>>> host: /etc/hosts:
W0717 01:54:53.777211    3644 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-901900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901900"

>>> host: /etc/resolv.conf:
W0717 01:54:54.104196    4596 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-901900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901900"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-901900

>>> host: crictl pods:
W0717 01:54:54.575139   14972 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-901900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901900"

>>> host: crictl containers:
W0717 01:54:54.867131    6480 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-901900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901900"

>>> k8s: describe netcat deployment:
error: context "cilium-901900" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-901900" does not exist

>>> k8s: netcat logs:
error: context "cilium-901900" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-901900" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-901900" does not exist

>>> k8s: coredns logs:
error: context "cilium-901900" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-901900" does not exist

>>> k8s: api server logs:
error: context "cilium-901900" does not exist

>>> host: /etc/cni:
W0717 01:54:56.519429    7700 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-901900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901900"

>>> host: ip a s:
W0717 01:54:56.832816   14292 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-901900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901900"

>>> host: ip r s:
W0717 01:54:57.150100    2604 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-901900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901900"

>>> host: iptables-save:
W0717 01:54:57.435825    2576 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-901900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901900"

>>> host: iptables table nat:
W0717 01:54:57.736495    1544 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-901900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901900"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-901900

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-901900

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-901900" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-901900" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-901900

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-901900

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-901900" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-901900" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-901900" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-901900" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-901900" does not exist

>>> host: kubelet daemon status:
W0717 01:55:00.019096    4256 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-901900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901900"

>>> host: kubelet daemon config:
W0717 01:55:00.727009    6812 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-901900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901900"

>>> k8s: kubelet logs:
W0717 01:55:01.341277    8516 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-901900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901900"

>>> host: /etc/kubernetes/kubelet.conf:
W0717 01:55:01.727155    6216 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-901900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901900"

>>> host: /var/lib/kubelet/config.yaml:
W0717 01:55:02.046330    2748 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-901900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901900"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-901900

>>> host: docker daemon status:
W0717 01:55:02.713173   10764 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-901900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901900"

>>> host: docker daemon config:
W0717 01:55:03.024722    8264 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-901900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901900"

>>> host: /etc/docker/daemon.json:
W0717 01:55:03.345048    8556 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-901900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901900"

>>> host: docker system info:
W0717 01:55:03.641307    9568 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-901900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901900"

>>> host: cri-docker daemon status:
W0717 01:55:03.979275    9436 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-901900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901900"

>>> host: cri-docker daemon config:
W0717 01:55:04.297955   14288 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-901900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901900"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
W0717 01:55:04.601579    3344 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-901900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901900"

>>> host: /usr/lib/systemd/system/cri-docker.service:
W0717 01:55:04.905715    9392 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-901900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901900"

>>> host: cri-dockerd version:
W0717 01:55:05.229860    9408 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-901900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901900"

>>> host: containerd daemon status:
W0717 01:55:05.541492    6284 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-901900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901900"

>>> host: containerd daemon config:
W0717 01:55:05.861137    3348 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-901900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901900"

>>> host: /lib/systemd/system/containerd.service:
W0717 01:55:06.182444    3900 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-901900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901900"

>>> host: /etc/containerd/config.toml:
W0717 01:55:06.503695    3996 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-901900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901900"

>>> host: containerd config dump:
W0717 01:55:06.819157    8240 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-901900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901900"

>>> host: crio daemon status:
W0717 01:55:07.348821    3812 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-901900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901900"

>>> host: crio daemon config:
W0717 01:55:07.859741    5508 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-901900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901900"

>>> host: /etc/crio:
W0717 01:55:08.335182   14500 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-901900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901900"

>>> host: crio config:
W0717 01:55:08.770830    8152 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube3\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
* Profile "cilium-901900" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-901900"

----------------------- debugLogs end: cilium-901900 [took: 17.5407559s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-901900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cilium-901900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cilium-901900: (1.4779367s)
--- SKIP: TestNetworkPlugins/group/cilium (19.02s)
