Test Report: Docker_Windows 19689

af422e057ba227eec8656c67d09f56de251f325e:2024-09-23:36336

Failed tests (4/339)

Order  Failed test                                              Duration (s)
33     TestAddons/parallel/Registry                             78.67
55     TestErrorSpam/setup                                      61.99
79     TestFunctional/serial/MinikubeKubectlCmdDirectly         5.32
366    TestStartStop/group/old-k8s-version/serial/SecondStart   410.37
TestAddons/parallel/Registry (78.67s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 7.7728ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-974hj" [80d9e439-0bf0-4a73-89d3-b97a73cfe368] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.0546376s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-lgf7x" [fc398532-2b4f-4f30-bd23-308f1c818be5] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.0081986s
addons_test.go:338: (dbg) Run:  kubectl --context addons-205800 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-205800 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-205800 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.2276244s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-205800 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:353: Unable to complete rest of the test due to connectivity assumptions
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-205800
helpers_test.go:235: (dbg) docker inspect addons-205800:

-- stdout --
	[
	    {
	        "Id": "3300bdfc2462a979790e9ea9042747811c3229eea87e30f8fa6e05beaa41159d",
	        "Created": "2024-09-23T10:23:09.041810006Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1657,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-23T10:23:09.377502307Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:d94335c0cd164ddebb3c5158e317bcf6d2e08dc08f448d25251f425acb842829",
	        "ResolvConfPath": "/var/lib/docker/containers/3300bdfc2462a979790e9ea9042747811c3229eea87e30f8fa6e05beaa41159d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3300bdfc2462a979790e9ea9042747811c3229eea87e30f8fa6e05beaa41159d/hostname",
	        "HostsPath": "/var/lib/docker/containers/3300bdfc2462a979790e9ea9042747811c3229eea87e30f8fa6e05beaa41159d/hosts",
	        "LogPath": "/var/lib/docker/containers/3300bdfc2462a979790e9ea9042747811c3229eea87e30f8fa6e05beaa41159d/3300bdfc2462a979790e9ea9042747811c3229eea87e30f8fa6e05beaa41159d-json.log",
	        "Name": "/addons-205800",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-205800:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-205800",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fef95318f49bc206f96724fef3a08addd4300e4e4121467a007f5be6f763bec9-init/diff:/var/lib/docker/overlay2/45a1d176e43ae6a4b4b413b83d6ac02867e558bd9182f31de6a362b3112ed40d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fef95318f49bc206f96724fef3a08addd4300e4e4121467a007f5be6f763bec9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fef95318f49bc206f96724fef3a08addd4300e4e4121467a007f5be6f763bec9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fef95318f49bc206f96724fef3a08addd4300e4e4121467a007f5be6f763bec9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-205800",
	                "Source": "/var/lib/docker/volumes/addons-205800/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-205800",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-205800",
	                "name.minikube.sigs.k8s.io": "addons-205800",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "edfb3ae3c4194dfdd4cf24ee8c5359684084cfcdb7496c6c703f4f9c19c834b8",
	            "SandboxKey": "/var/run/docker/netns/edfb3ae3c419",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56907"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56903"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56904"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56905"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56906"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-205800": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "5be8028a97e428f5c1ec383079c1d3f0c35f296ac74ddfa37b95efb0c7793883",
	                    "EndpointID": "9827607b4aa49a0a18a97ccfca9f0e13f12496c457ffdf39ba3fca4a8dd87c6f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-205800",
	                        "3300bdfc2462"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-205800 -n addons-205800
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-205800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-205800 logs -n 25: (2.6566196s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|-------------------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube               | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| delete  | -p download-only-447300                                                                     | download-only-447300   | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| start   | -o=json --download-only                                                                     | download-only-329100   | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | -p download-only-329100                                                                     |                        |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |                   |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                        |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |                   |         |                     |                     |
	|         | --driver=docker                                                                             |                        |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| delete  | -p download-only-329100                                                                     | download-only-329100   | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| delete  | -p download-only-447300                                                                     | download-only-447300   | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| delete  | -p download-only-329100                                                                     | download-only-329100   | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| start   | --download-only -p                                                                          | download-docker-559400 | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | download-docker-559400                                                                      |                        |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |                   |         |                     |                     |
	|         | --driver=docker                                                                             |                        |                   |         |                     |                     |
	| delete  | -p download-docker-559400                                                                   | download-docker-559400 | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-982700   | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | binary-mirror-982700                                                                        |                        |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |                   |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |                   |         |                     |                     |
	|         | http://127.0.0.1:56883                                                                      |                        |                   |         |                     |                     |
	|         | --driver=docker                                                                             |                        |                   |         |                     |                     |
	| delete  | -p binary-mirror-982700                                                                     | binary-mirror-982700   | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| addons  | disable dashboard -p                                                                        | addons-205800          | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | addons-205800                                                                               |                        |                   |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-205800          | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | addons-205800                                                                               |                        |                   |         |                     |                     |
	| start   | -p addons-205800 --wait=true                                                                | addons-205800          | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:29 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                        |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |                   |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |                   |         |                     |                     |
	|         | --driver=docker --addons=ingress                                                            |                        |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |                   |         |                     |                     |
	| addons  | addons-205800 addons disable                                                                | addons-205800          | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:30 UTC | 23 Sep 24 10:30 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |                   |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-205800          | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:38 UTC | 23 Sep 24 10:39 UTC |
	|         | -p addons-205800                                                                            |                        |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |                   |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-205800          | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:39 UTC | 23 Sep 24 10:39 UTC |
	|         | -p addons-205800                                                                            |                        |                   |         |                     |                     |
	| addons  | addons-205800 addons disable                                                                | addons-205800          | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:39 UTC | 23 Sep 24 10:39 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |                   |         |                     |                     |
	| ssh     | addons-205800 ssh cat                                                                       | addons-205800          | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:39 UTC | 23 Sep 24 10:39 UTC |
	|         | /opt/local-path-provisioner/pvc-ba41e5d6-ad17-4871-8b82-be93f5551393_default_test-pvc/file1 |                        |                   |         |                     |                     |
	| addons  | addons-205800 addons disable                                                                | addons-205800          | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:39 UTC | 23 Sep 24 10:40 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |                   |         |                     |                     |
	| addons  | addons-205800 addons disable                                                                | addons-205800          | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:39 UTC | 23 Sep 24 10:39 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |                   |         |                     |                     |
	|         | -v=1                                                                                        |                        |                   |         |                     |                     |
	| addons  | addons-205800 addons                                                                        | addons-205800          | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:39 UTC | 23 Sep 24 10:39 UTC |
	|         | disable metrics-server                                                                      |                        |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |                   |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-205800          | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:39 UTC | 23 Sep 24 10:39 UTC |
	|         | addons-205800                                                                               |                        |                   |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-205800          | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:39 UTC | 23 Sep 24 10:39 UTC |
	|         | addons-205800                                                                               |                        |                   |         |                     |                     |
	| ssh     | addons-205800 ssh curl -s                                                                   | addons-205800          | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:40 UTC | 23 Sep 24 10:40 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |                   |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:21:19
	Running on machine: minikube4
	Binary: Built with gc go1.23.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 10:21:19.817670    2156 out.go:345] Setting OutFile to fd 1012 ...
	I0923 10:21:19.890822    2156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:21:19.890822    2156 out.go:358] Setting ErrFile to fd 1008...
	I0923 10:21:19.890822    2156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:21:19.910515    2156 out.go:352] Setting JSON to false
	I0923 10:21:19.912978    2156 start.go:129] hostinfo: {"hostname":"minikube4","uptime":47443,"bootTime":1727039436,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0923 10:21:19.912978    2156 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 10:21:19.917384    2156 out.go:177] * [addons-205800] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0923 10:21:19.919750    2156 notify.go:220] Checking for updates...
	I0923 10:21:19.919969    2156 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0923 10:21:19.923557    2156 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 10:21:19.926099    2156 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0923 10:21:19.927930    2156 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 10:21:19.931123    2156 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:21:19.933473    2156 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:21:20.121058    2156 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.1 (166053)
	I0923 10:21:20.132041    2156 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:21:20.428723    2156 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:75 SystemTime:2024-09-23 10:21:20.40395391 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Inde
xServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 E
xpected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaV
ersion:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://
github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0923 10:21:20.431675    2156 out.go:177] * Using the docker driver based on user configuration
	I0923 10:21:20.435234    2156 start.go:297] selected driver: docker
	I0923 10:21:20.435234    2156 start.go:901] validating driver "docker" against <nil>
	I0923 10:21:20.435326    2156 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 10:21:20.499074    2156 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:21:20.803061    2156 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:75 SystemTime:2024-09-23 10:21:20.777077086 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0923 10:21:20.803537    2156 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 10:21:20.804721    2156 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:21:20.807642    2156 out.go:177] * Using Docker Desktop driver with root privileges
	I0923 10:21:20.810119    2156 cni.go:84] Creating CNI manager for ""
	I0923 10:21:20.810208    2156 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 10:21:20.810208    2156 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 10:21:20.810375    2156 start.go:340] cluster config:
	{Name:addons-205800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-205800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:21:20.812671    2156 out.go:177] * Starting "addons-205800" primary control-plane node in "addons-205800" cluster
	I0923 10:21:20.815644    2156 cache.go:121] Beginning downloading kic base image for docker with docker
	I0923 10:21:20.819607    2156 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0923 10:21:20.821742    2156 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 10:21:20.821800    2156 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 10:21:20.821961    2156 preload.go:146] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0923 10:21:20.822019    2156 cache.go:56] Caching tarball of preloaded images
	I0923 10:21:20.822392    2156 preload.go:172] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 10:21:20.822392    2156 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 10:21:20.823140    2156 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-205800\config.json ...
	I0923 10:21:20.823140    2156 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-205800\config.json: {Name:mk4539e9ccaf1d08b8773b6013682b3e97519116 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:20.901197    2156 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 10:21:20.901197    2156 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1726784731-19672@sha256_7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed.tar
	I0923 10:21:20.901197    2156 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1726784731-19672@sha256_7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed.tar
	I0923 10:21:20.901197    2156 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 10:21:20.901197    2156 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0923 10:21:20.901197    2156 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0923 10:21:20.902284    2156 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0923 10:21:20.902284    2156 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from local cache
	I0923 10:21:20.902284    2156 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1726784731-19672@sha256_7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed.tar
	I0923 10:22:32.171604    2156 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from cached tarball
	I0923 10:22:32.171604    2156 cache.go:194] Successfully downloaded all kic artifacts
	I0923 10:22:32.171999    2156 start.go:360] acquireMachinesLock for addons-205800: {Name:mka2a983952a6d940335283e9d387bf0f6526f45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 10:22:32.172284    2156 start.go:364] duration metric: took 285.1µs to acquireMachinesLock for "addons-205800"
	I0923 10:22:32.172663    2156 start.go:93] Provisioning new machine with config: &{Name:addons-205800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-205800 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 10:22:32.172865    2156 start.go:125] createHost starting for "" (driver="docker")
	I0923 10:22:32.176426    2156 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0923 10:22:32.176862    2156 start.go:159] libmachine.API.Create for "addons-205800" (driver="docker")
	I0923 10:22:32.176980    2156 client.go:168] LocalClient.Create starting
	I0923 10:22:32.177973    2156 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I0923 10:22:32.336423    2156 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I0923 10:22:32.540780    2156 cli_runner.go:164] Run: docker network inspect addons-205800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0923 10:22:32.610844    2156 cli_runner.go:211] docker network inspect addons-205800 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0923 10:22:32.625347    2156 network_create.go:284] running [docker network inspect addons-205800] to gather additional debugging logs...
	I0923 10:22:32.625347    2156 cli_runner.go:164] Run: docker network inspect addons-205800
	W0923 10:22:32.696958    2156 cli_runner.go:211] docker network inspect addons-205800 returned with exit code 1
	I0923 10:22:32.697064    2156 network_create.go:287] error running [docker network inspect addons-205800]: docker network inspect addons-205800: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-205800 not found
	I0923 10:22:32.697064    2156 network_create.go:289] output of [docker network inspect addons-205800]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-205800 not found
	
	** /stderr **
	I0923 10:22:32.707242    2156 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 10:22:32.800568    2156 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001301d10}
	I0923 10:22:32.800702    2156 network_create.go:124] attempt to create docker network addons-205800 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0923 10:22:32.810557    2156 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-205800 addons-205800
	I0923 10:22:33.010742    2156 network_create.go:108] docker network addons-205800 192.168.49.0/24 created
	I0923 10:22:33.011386    2156 kic.go:121] calculated static IP "192.168.49.2" for the "addons-205800" container
	I0923 10:22:33.036379    2156 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0923 10:22:33.122734    2156 cli_runner.go:164] Run: docker volume create addons-205800 --label name.minikube.sigs.k8s.io=addons-205800 --label created_by.minikube.sigs.k8s.io=true
	I0923 10:22:33.204399    2156 oci.go:103] Successfully created a docker volume addons-205800
	I0923 10:22:33.214798    2156 cli_runner.go:164] Run: docker run --rm --name addons-205800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-205800 --entrypoint /usr/bin/test -v addons-205800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib
	I0923 10:22:45.672927    2156 cli_runner.go:217] Completed: docker run --rm --name addons-205800-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-205800 --entrypoint /usr/bin/test -v addons-205800:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib: (12.4574185s)
	I0923 10:22:45.672986    2156 oci.go:107] Successfully prepared a docker volume addons-205800
	I0923 10:22:45.673031    2156 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 10:22:45.673031    2156 kic.go:194] Starting extracting preloaded images to volume ...
	I0923 10:22:45.681820    2156 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-205800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir
	I0923 10:23:08.319812    2156 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-205800:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir: (22.6369214s)
	I0923 10:23:08.319812    2156 kic.go:203] duration metric: took 22.6457107s to extract preloaded images to volume ...
	I0923 10:23:08.330130    2156 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:23:08.630235    2156 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:76 SystemTime:2024-09-23 10:23:08.603977665 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0923 10:23:08.641320    2156 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0923 10:23:08.972650    2156 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-205800 --name addons-205800 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-205800 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-205800 --network addons-205800 --ip 192.168.49.2 --volume addons-205800:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed
	I0923 10:23:09.769997    2156 cli_runner.go:164] Run: docker container inspect addons-205800 --format={{.State.Running}}
	I0923 10:23:09.857687    2156 cli_runner.go:164] Run: docker container inspect addons-205800 --format={{.State.Status}}
	I0923 10:23:09.944508    2156 cli_runner.go:164] Run: docker exec addons-205800 stat /var/lib/dpkg/alternatives/iptables
	I0923 10:23:10.093506    2156 oci.go:144] the created container "addons-205800" has a running status.
	I0923 10:23:10.093506    2156 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\addons-205800\id_rsa...
	I0923 10:23:10.346280    2156 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\addons-205800\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0923 10:23:10.476485    2156 cli_runner.go:164] Run: docker container inspect addons-205800 --format={{.State.Status}}
	I0923 10:23:10.579315    2156 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0923 10:23:10.579315    2156 kic_runner.go:114] Args: [docker exec --privileged addons-205800 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0923 10:23:10.752684    2156 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\addons-205800\id_rsa...
	I0923 10:23:13.424820    2156 cli_runner.go:164] Run: docker container inspect addons-205800 --format={{.State.Status}}
	I0923 10:23:13.499466    2156 machine.go:93] provisionDockerMachine start ...
	I0923 10:23:13.509340    2156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205800
	I0923 10:23:13.585820    2156 main.go:141] libmachine: Using SSH client type: native
	I0923 10:23:13.595435    2156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x761bc0] 0x764700 <nil>  [] 0s} 127.0.0.1 56907 <nil> <nil>}
	I0923 10:23:13.595435    2156 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 10:23:13.777126    2156 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-205800
	
	I0923 10:23:13.777126    2156 ubuntu.go:169] provisioning hostname "addons-205800"
	I0923 10:23:13.791059    2156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205800
	I0923 10:23:13.874516    2156 main.go:141] libmachine: Using SSH client type: native
	I0923 10:23:13.874975    2156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x761bc0] 0x764700 <nil>  [] 0s} 127.0.0.1 56907 <nil> <nil>}
	I0923 10:23:13.874975    2156 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-205800 && echo "addons-205800" | sudo tee /etc/hostname
	I0923 10:23:14.088406    2156 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-205800
	
	I0923 10:23:14.097957    2156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205800
	I0923 10:23:14.176698    2156 main.go:141] libmachine: Using SSH client type: native
	I0923 10:23:14.177305    2156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x761bc0] 0x764700 <nil>  [] 0s} 127.0.0.1 56907 <nil> <nil>}
	I0923 10:23:14.177305    2156 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-205800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-205800/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-205800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 10:23:14.370226    2156 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 10:23:14.370226    2156 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0923 10:23:14.370226    2156 ubuntu.go:177] setting up certificates
	I0923 10:23:14.370226    2156 provision.go:84] configureAuth start
	I0923 10:23:14.380223    2156 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-205800
	I0923 10:23:14.450253    2156 provision.go:143] copyHostCerts
	I0923 10:23:14.450253    2156 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0923 10:23:14.452698    2156 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0923 10:23:14.453964    2156 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I0923 10:23:14.455048    2156 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-205800 san=[127.0.0.1 192.168.49.2 addons-205800 localhost minikube]
	I0923 10:23:14.600775    2156 provision.go:177] copyRemoteCerts
	I0923 10:23:14.611676    2156 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 10:23:14.621575    2156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205800
	I0923 10:23:14.701673    2156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56907 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\addons-205800\id_rsa Username:docker}
	I0923 10:23:14.845434    2156 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 10:23:14.895274    2156 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 10:23:14.942572    2156 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 10:23:14.986606    2156 provision.go:87] duration metric: took 615.3514ms to configureAuth
	I0923 10:23:14.986606    2156 ubuntu.go:193] setting minikube options for container-runtime
	I0923 10:23:14.987163    2156 config.go:182] Loaded profile config "addons-205800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:23:14.996465    2156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205800
	I0923 10:23:15.080560    2156 main.go:141] libmachine: Using SSH client type: native
	I0923 10:23:15.081127    2156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x761bc0] 0x764700 <nil>  [] 0s} 127.0.0.1 56907 <nil> <nil>}
	I0923 10:23:15.081175    2156 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 10:23:15.267461    2156 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0923 10:23:15.267579    2156 ubuntu.go:71] root file system type: overlay
	I0923 10:23:15.267847    2156 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 10:23:15.275970    2156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205800
	I0923 10:23:15.352394    2156 main.go:141] libmachine: Using SSH client type: native
	I0923 10:23:15.352860    2156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x761bc0] 0x764700 <nil>  [] 0s} 127.0.0.1 56907 <nil> <nil>}
	I0923 10:23:15.352966    2156 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 10:23:15.562712    2156 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 10:23:15.573019    2156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205800
	I0923 10:23:15.662310    2156 main.go:141] libmachine: Using SSH client type: native
	I0923 10:23:15.662770    2156 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x761bc0] 0x764700 <nil>  [] 0s} 127.0.0.1 56907 <nil> <nil>}
	I0923 10:23:15.662770    2156 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 10:23:17.086433    2156 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-19 14:24:32.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-23 10:23:15.549059077 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0923 10:23:17.086521    2156 machine.go:96] duration metric: took 3.5868262s to provisionDockerMachine
	I0923 10:23:17.086521    2156 client.go:171] duration metric: took 44.9074159s to LocalClient.Create
	I0923 10:23:17.086608    2156 start.go:167] duration metric: took 44.9076217s to libmachine.API.Create "addons-205800"
	I0923 10:23:17.086735    2156 start.go:293] postStartSetup for "addons-205800" (driver="docker")
	I0923 10:23:17.086814    2156 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 10:23:17.099277    2156 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 10:23:17.109779    2156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205800
	I0923 10:23:17.195865    2156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56907 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\addons-205800\id_rsa Username:docker}
	I0923 10:23:17.347573    2156 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 10:23:17.357736    2156 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 10:23:17.357736    2156 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 10:23:17.357736    2156 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 10:23:17.357736    2156 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0923 10:23:17.357736    2156 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I0923 10:23:17.358443    2156 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I0923 10:23:17.358443    2156 start.go:296] duration metric: took 271.6395ms for postStartSetup
	I0923 10:23:17.370876    2156 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-205800
	I0923 10:23:17.442288    2156 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-205800\config.json ...
	I0923 10:23:17.459405    2156 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 10:23:17.467397    2156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205800
	I0923 10:23:17.542692    2156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56907 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\addons-205800\id_rsa Username:docker}
	I0923 10:23:17.681617    2156 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0923 10:23:17.693790    2156 start.go:128] duration metric: took 45.5186981s to createHost
	I0923 10:23:17.693790    2156 start.go:83] releasing machines lock for "addons-205800", held for 45.5192748s
	I0923 10:23:17.704455    2156 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-205800
	I0923 10:23:17.779413    2156 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0923 10:23:17.788426    2156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205800
	I0923 10:23:17.789410    2156 ssh_runner.go:195] Run: cat /version.json
	I0923 10:23:17.797410    2156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205800
	I0923 10:23:17.853522    2156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56907 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\addons-205800\id_rsa Username:docker}
	I0923 10:23:17.866305    2156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56907 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\addons-205800\id_rsa Username:docker}
	W0923 10:23:17.976789    2156 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0923 10:23:18.005291    2156 ssh_runner.go:195] Run: systemctl --version
	I0923 10:23:18.040253    2156 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 10:23:18.072438    2156 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W0923 10:23:18.093490    2156 start.go:439] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I0923 10:23:18.108487    2156 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W0923 10:23:18.142928    2156 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W0923 10:23:18.142928    2156 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0923 10:23:18.174149    2156 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 10:23:18.174149    2156 start.go:495] detecting cgroup driver to use...
	I0923 10:23:18.174261    2156 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 10:23:18.174634    2156 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 10:23:18.220398    2156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 10:23:18.254472    2156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 10:23:18.280785    2156 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 10:23:18.296717    2156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 10:23:18.340297    2156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 10:23:18.376464    2156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 10:23:18.410324    2156 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 10:23:18.446509    2156 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 10:23:18.484392    2156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 10:23:18.519140    2156 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 10:23:18.553835    2156 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 10:23:18.588914    2156 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 10:23:18.619640    2156 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 10:23:18.650686    2156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:23:18.798527    2156 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 10:23:19.005850    2156 start.go:495] detecting cgroup driver to use...
	I0923 10:23:19.006006    2156 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 10:23:19.019656    2156 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 10:23:19.047162    2156 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0923 10:23:19.067574    2156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 10:23:19.095858    2156 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 10:23:19.153614    2156 ssh_runner.go:195] Run: which cri-dockerd
	I0923 10:23:19.177846    2156 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 10:23:19.200134    2156 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 10:23:19.248580    2156 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 10:23:19.413939    2156 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 10:23:19.585468    2156 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 10:23:19.585636    2156 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 10:23:19.629523    2156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:23:19.796333    2156 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 10:23:20.489673    2156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 10:23:20.526830    2156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 10:23:20.566368    2156 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 10:23:20.711710    2156 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 10:23:20.875266    2156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:23:21.037265    2156 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 10:23:21.078050    2156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 10:23:21.112511    2156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:23:21.256065    2156 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 10:23:21.398190    2156 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 10:23:21.410847    2156 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 10:23:21.421940    2156 start.go:563] Will wait 60s for crictl version
	I0923 10:23:21.434204    2156 ssh_runner.go:195] Run: which crictl
	I0923 10:23:21.455733    2156 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 10:23:21.523920    2156 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0923 10:23:21.534017    2156 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 10:23:21.596943    2156 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 10:23:21.654746    2156 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0923 10:23:21.667989    2156 cli_runner.go:164] Run: docker exec -t addons-205800 dig +short host.docker.internal
	I0923 10:23:21.856448    2156 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0923 10:23:21.871149    2156 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0923 10:23:21.881933    2156 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 10:23:21.912878    2156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-205800
	I0923 10:23:21.989451    2156 kubeadm.go:883] updating cluster {Name:addons-205800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-205800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 10:23:21.989451    2156 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 10:23:21.999933    2156 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 10:23:22.044089    2156 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 10:23:22.044151    2156 docker.go:615] Images already preloaded, skipping extraction
	I0923 10:23:22.053164    2156 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 10:23:22.092959    2156 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
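
	[Editor's note] The "Images are preloaded, skipping loading" decision that follows comes down to comparing docker's reported tags against the expected preload set for this Kubernetes version. A minimal Python sketch of that check (not minikube's actual Go code; image names are copied from the log, and `reported` stands in for parsed `docker images --format` output):

```python
# Sketch (not minikube's actual implementation): skip image extraction
# when every expected preloaded image is already reported by docker.
# Image names below are taken from the log output above.
expected = {
    "registry.k8s.io/kube-apiserver:v1.31.1",
    "registry.k8s.io/kube-scheduler:v1.31.1",
    "registry.k8s.io/kube-controller-manager:v1.31.1",
    "registry.k8s.io/kube-proxy:v1.31.1",
    "registry.k8s.io/coredns/coredns:v1.11.3",
    "registry.k8s.io/etcd:3.5.15-0",
    "registry.k8s.io/pause:3.10",
    "gcr.io/k8s-minikube/storage-provisioner:v5",
}

# Stand-in for the parsed stdout of `docker images --format {{.Repository}}:{{.Tag}}`.
reported = set("""\
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
""".splitlines())

missing = expected - reported
print("skipping extraction" if not missing else f"must extract: {sorted(missing)}")
```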
	I0923 10:23:22.092959    2156 cache_images.go:84] Images are preloaded, skipping loading
	I0923 10:23:22.092959    2156 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0923 10:23:22.093627    2156 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-205800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-205800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 10:23:22.103393    2156 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0923 10:23:22.189461    2156 cni.go:84] Creating CNI manager for ""
	I0923 10:23:22.189461    2156 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 10:23:22.190147    2156 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 10:23:22.190317    2156 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-205800 NodeName:addons-205800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 10:23:22.190567    2156 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-205800"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
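
	[Editor's note] kubeadm later warns during init that this config uses the deprecated `kubeadm.k8s.io/v1beta3` API. A small stdlib-Python sketch of scanning such a multi-document config for its `apiVersion`/`kind` headers (the embedded text is a trimmed stand-in for the full generated file, not the file itself):

```python
# Sketch: enumerate (apiVersion, kind) pairs in a multi-document kubeadm
# config and flag documents using the deprecated kubeadm.k8s.io/v1beta3
# spec. The config text is a trimmed stand-in based on the log above.
config = """\
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
"""

def doc_headers(text):
    """Yield (apiVersion, kind) for each '---'-separated YAML document."""
    for doc in text.split("---"):
        fields = {}
        for line in doc.strip().splitlines():
            if line.startswith(("apiVersion:", "kind:")):
                key, _, val = line.partition(":")
                fields[key] = val.strip()
        if fields:
            yield fields.get("apiVersion"), fields.get("kind")

deprecated = [kind for api, kind in doc_headers(config)
              if api == "kubeadm.k8s.io/v1beta3"]
print(deprecated)
```

This matches the two deprecation warnings kubeadm emits later in the log, one for `ClusterConfiguration` and one for `InitConfiguration`.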
	I0923 10:23:22.202775    2156 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 10:23:22.223478    2156 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 10:23:22.234923    2156 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 10:23:22.255358    2156 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0923 10:23:22.285400    2156 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 10:23:22.317530    2156 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0923 10:23:22.362714    2156 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0923 10:23:22.373794    2156 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
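
	[Editor's note] The bash one-liner above rewrites `/etc/hosts` as an upsert: drop any stale line for `control-plane.minikube.internal`, then append the current record. The same logic in plain Python, for readability (`upsert_host` is a hypothetical helper, not minikube code):

```python
# Sketch of the /etc/hosts upsert done by the bash one-liner in the log:
# remove any existing line ending in "\t<name>", then append "ip\tname".
# upsert_host is a hypothetical helper for illustration only.
def upsert_host(hosts_text: str, ip: str, name: str) -> str:
    kept = [ln for ln in hosts_text.splitlines()
            if not ln.endswith("\t" + name)]       # mirrors grep -v $'\t<name>$'
    kept.append(f"{ip}\t{name}")                   # mirrors the echo append
    return "\n".join(kept) + "\n"

before = "127.0.0.1\tlocalhost\n10.0.0.9\tcontrol-plane.minikube.internal\n"
after = upsert_host(before, "192.168.49.2", "control-plane.minikube.internal")
print(after)
```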
	I0923 10:23:22.409973    2156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:23:22.557806    2156 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:23:22.586295    2156 certs.go:68] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-205800 for IP: 192.168.49.2
	I0923 10:23:22.586295    2156 certs.go:194] generating shared ca certs ...
	I0923 10:23:22.586295    2156 certs.go:226] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:23:22.586936    2156 certs.go:240] generating "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I0923 10:23:22.817784    2156 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt ...
	I0923 10:23:22.817784    2156 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt: {Name:mk9bfb57717fe0294c5e7df5f8c64e4a472aed76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:23:22.818133    2156 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key ...
	I0923 10:23:22.819146    2156 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key: {Name:mkaf629c86634d8c06aa76eb4f156580984c52e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:23:22.819370    2156 certs.go:240] generating "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I0923 10:23:23.149935    2156 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0923 10:23:23.149935    2156 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mk9c3ccc3549be1fa7f782d8912f5526cfdc9441 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:23:23.151012    2156 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key ...
	I0923 10:23:23.151012    2156 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key: {Name:mkb7ca058748bde1bcd6cbc9fb3319315d2b4a28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:23:23.152031    2156 certs.go:256] generating profile certs ...
	I0923 10:23:23.153013    2156 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-205800\client.key
	I0923 10:23:23.153013    2156 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-205800\client.crt with IP's: []
	I0923 10:23:23.329214    2156 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-205800\client.crt ...
	I0923 10:23:23.329214    2156 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-205800\client.crt: {Name:mkdfc2955ccb78a126b3ffd45b194da2113b8e9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:23:23.330140    2156 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-205800\client.key ...
	I0923 10:23:23.330140    2156 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-205800\client.key: {Name:mk5f8f6845b943c7d95871b83bd5b3ef5698d2df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:23:23.331233    2156 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-205800\apiserver.key.894390b4
	I0923 10:23:23.332173    2156 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-205800\apiserver.crt.894390b4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0923 10:23:23.495251    2156 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-205800\apiserver.crt.894390b4 ...
	I0923 10:23:23.495251    2156 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-205800\apiserver.crt.894390b4: {Name:mk15c124e184acfc7707abf62b5faaa9de074cd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:23:23.495555    2156 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-205800\apiserver.key.894390b4 ...
	I0923 10:23:23.495555    2156 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-205800\apiserver.key.894390b4: {Name:mk3251ddb8c3f931989fdc1685e7aa03d5710d1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:23:23.496688    2156 certs.go:381] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-205800\apiserver.crt.894390b4 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-205800\apiserver.crt
	I0923 10:23:23.507756    2156 certs.go:385] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-205800\apiserver.key.894390b4 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-205800\apiserver.key
	I0923 10:23:23.508537    2156 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-205800\proxy-client.key
	I0923 10:23:23.508537    2156 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-205800\proxy-client.crt with IP's: []
	I0923 10:23:23.889122    2156 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-205800\proxy-client.crt ...
	I0923 10:23:23.889122    2156 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-205800\proxy-client.crt: {Name:mk1aa5a866a5f074d9f2c2581ac50698e8f9e78f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:23:23.890102    2156 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-205800\proxy-client.key ...
	I0923 10:23:23.890102    2156 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-205800\proxy-client.key: {Name:mkbaadbfd0db599dead2ba523e541164b2162a62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:23:23.901217    2156 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0923 10:23:23.902052    2156 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0923 10:23:23.902307    2156 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0923 10:23:23.902549    2156 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0923 10:23:23.903802    2156 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 10:23:23.956619    2156 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 10:23:24.001977    2156 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 10:23:24.044169    2156 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 10:23:24.092418    2156 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-205800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0923 10:23:24.138373    2156 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-205800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 10:23:24.186076    2156 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-205800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 10:23:24.239182    2156 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\addons-205800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 10:23:24.285302    2156 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 10:23:24.331201    2156 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 10:23:24.378355    2156 ssh_runner.go:195] Run: openssl version
	I0923 10:23:24.403455    2156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 10:23:24.437070    2156 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:23:24.447691    2156 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:23:24.463858    2156 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:23:24.489797    2156 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 10:23:24.522435    2156 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 10:23:24.536261    2156 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 10:23:24.536261    2156 kubeadm.go:392] StartCluster: {Name:addons-205800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-205800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:23:24.549212    2156 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 10:23:24.602290    2156 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 10:23:24.635817    2156 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 10:23:24.656770    2156 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0923 10:23:24.668663    2156 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 10:23:24.688045    2156 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 10:23:24.688106    2156 kubeadm.go:157] found existing configuration files:
	
	I0923 10:23:24.699801    2156 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 10:23:24.719135    2156 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 10:23:24.731171    2156 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 10:23:24.762533    2156 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 10:23:24.782859    2156 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 10:23:24.794067    2156 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 10:23:24.827491    2156 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 10:23:24.846315    2156 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 10:23:24.858285    2156 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 10:23:24.894724    2156 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 10:23:24.914999    2156 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 10:23:24.929162    2156 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 10:23:24.951716    2156 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0923 10:23:25.020630    2156 kubeadm.go:310] W0923 10:23:25.019555    1984 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:23:25.021174    2156 kubeadm.go:310] W0923 10:23:25.020413    1984 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:23:25.050672    2156 kubeadm.go:310] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I0923 10:23:25.167511    2156 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 10:23:40.646896    2156 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 10:23:40.647071    2156 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 10:23:40.647071    2156 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 10:23:40.647071    2156 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 10:23:40.647817    2156 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 10:23:40.650162    2156 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 10:23:40.652898    2156 out.go:235]   - Generating certificates and keys ...
	I0923 10:23:40.653064    2156 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 10:23:40.653258    2156 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 10:23:40.653855    2156 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 10:23:40.653855    2156 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 10:23:40.653855    2156 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 10:23:40.653855    2156 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 10:23:40.654483    2156 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 10:23:40.654483    2156 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-205800 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 10:23:40.654483    2156 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 10:23:40.655013    2156 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-205800 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 10:23:40.655085    2156 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 10:23:40.655085    2156 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 10:23:40.655633    2156 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 10:23:40.655779    2156 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 10:23:40.655885    2156 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 10:23:40.655974    2156 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 10:23:40.655974    2156 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 10:23:40.655974    2156 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 10:23:40.656534    2156 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 10:23:40.656534    2156 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 10:23:40.656534    2156 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 10:23:40.659376    2156 out.go:235]   - Booting up control plane ...
	I0923 10:23:40.659898    2156 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 10:23:40.660009    2156 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 10:23:40.660048    2156 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 10:23:40.660048    2156 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 10:23:40.660048    2156 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 10:23:40.660048    2156 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 10:23:40.660048    2156 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 10:23:40.660581    2156 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 10:23:40.660581    2156 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001755264s
	I0923 10:23:40.660742    2156 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 10:23:40.660742    2156 kubeadm.go:310] [api-check] The API server is healthy after 9.002495977s
	I0923 10:23:40.660742    2156 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 10:23:40.660742    2156 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 10:23:40.660742    2156 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 10:23:40.661308    2156 kubeadm.go:310] [mark-control-plane] Marking the node addons-205800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 10:23:40.661456    2156 kubeadm.go:310] [bootstrap-token] Using token: jsw52p.wo9e1v9lwsw8bbeu
	I0923 10:23:40.666803    2156 out.go:235]   - Configuring RBAC rules ...
	I0923 10:23:40.666803    2156 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 10:23:40.666803    2156 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 10:23:40.666803    2156 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 10:23:40.667844    2156 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 10:23:40.667844    2156 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 10:23:40.667844    2156 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 10:23:40.667844    2156 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 10:23:40.667844    2156 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 10:23:40.668835    2156 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 10:23:40.668893    2156 kubeadm.go:310] 
	I0923 10:23:40.669036    2156 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 10:23:40.669088    2156 kubeadm.go:310] 
	I0923 10:23:40.669276    2156 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 10:23:40.669331    2156 kubeadm.go:310] 
	I0923 10:23:40.669331    2156 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 10:23:40.669490    2156 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 10:23:40.669631    2156 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 10:23:40.669631    2156 kubeadm.go:310] 
	I0923 10:23:40.669748    2156 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 10:23:40.669748    2156 kubeadm.go:310] 
	I0923 10:23:40.669922    2156 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 10:23:40.669922    2156 kubeadm.go:310] 
	I0923 10:23:40.670055    2156 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 10:23:40.670166    2156 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 10:23:40.670340    2156 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 10:23:40.670340    2156 kubeadm.go:310] 
	I0923 10:23:40.670514    2156 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 10:23:40.670790    2156 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 10:23:40.670790    2156 kubeadm.go:310] 
	I0923 10:23:40.670969    2156 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jsw52p.wo9e1v9lwsw8bbeu \
	I0923 10:23:40.671189    2156 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f6be71f7163a6cfdc7ef789cb7a430d9e03a0ceaa00a90394d719117597a128d \
	I0923 10:23:40.671189    2156 kubeadm.go:310] 	--control-plane 
	I0923 10:23:40.671385    2156 kubeadm.go:310] 
	I0923 10:23:40.671561    2156 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 10:23:40.671561    2156 kubeadm.go:310] 
	I0923 10:23:40.671849    2156 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jsw52p.wo9e1v9lwsw8bbeu \
	I0923 10:23:40.672054    2156 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f6be71f7163a6cfdc7ef789cb7a430d9e03a0ceaa00a90394d719117597a128d 
	I0923 10:23:40.672054    2156 cni.go:84] Creating CNI manager for ""
	I0923 10:23:40.672054    2156 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 10:23:40.675534    2156 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 10:23:40.692118    2156 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0923 10:23:40.744900    2156 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0923 10:23:40.846717    2156 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 10:23:40.859720    2156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:23:40.859720    2156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-205800 minikube.k8s.io/updated_at=2024_09_23T10_23_40_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986 minikube.k8s.io/name=addons-205800 minikube.k8s.io/primary=true
	I0923 10:23:40.862912    2156 ops.go:34] apiserver oom_adj: -16
	I0923 10:23:41.094299    2156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:23:41.593606    2156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:23:42.094334    2156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:23:42.592471    2156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:23:43.093140    2156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:23:43.594490    2156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:23:44.093501    2156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:23:44.595183    2156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:23:45.094144    2156 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:23:45.276230    2156 kubeadm.go:1113] duration metric: took 4.429304s to wait for elevateKubeSystemPrivileges
	I0923 10:23:45.276230    2156 kubeadm.go:394] duration metric: took 20.7389881s to StartCluster
	I0923 10:23:45.276373    2156 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:23:45.276645    2156 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0923 10:23:45.277844    2156 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:23:45.280145    2156 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 10:23:45.280145    2156 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 10:23:45.280266    2156 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0923 10:23:45.280550    2156 addons.go:69] Setting cloud-spanner=true in profile "addons-205800"
	I0923 10:23:45.280605    2156 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-205800"
	I0923 10:23:45.280605    2156 addons.go:69] Setting metrics-server=true in profile "addons-205800"
	I0923 10:23:45.280605    2156 addons.go:234] Setting addon metrics-server=true in "addons-205800"
	I0923 10:23:45.280605    2156 addons.go:69] Setting ingress-dns=true in profile "addons-205800"
	I0923 10:23:45.280605    2156 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-205800"
	I0923 10:23:45.280605    2156 config.go:182] Loaded profile config "addons-205800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:23:45.280605    2156 host.go:66] Checking if "addons-205800" exists ...
	I0923 10:23:45.280605    2156 host.go:66] Checking if "addons-205800" exists ...
	I0923 10:23:45.280605    2156 addons.go:69] Setting ingress=true in profile "addons-205800"
	I0923 10:23:45.280605    2156 addons.go:234] Setting addon ingress=true in "addons-205800"
	I0923 10:23:45.280605    2156 addons.go:234] Setting addon cloud-spanner=true in "addons-205800"
	I0923 10:23:45.280550    2156 addons.go:69] Setting yakd=true in profile "addons-205800"
	I0923 10:23:45.281163    2156 addons.go:234] Setting addon yakd=true in "addons-205800"
	I0923 10:23:45.280605    2156 addons.go:69] Setting gcp-auth=true in profile "addons-205800"
	I0923 10:23:45.281163    2156 host.go:66] Checking if "addons-205800" exists ...
	I0923 10:23:45.280605    2156 addons.go:69] Setting registry=true in profile "addons-205800"
	I0923 10:23:45.281163    2156 host.go:66] Checking if "addons-205800" exists ...
	I0923 10:23:45.281163    2156 addons.go:234] Setting addon registry=true in "addons-205800"
	I0923 10:23:45.281163    2156 mustload.go:65] Loading cluster: addons-205800
	I0923 10:23:45.281163    2156 host.go:66] Checking if "addons-205800" exists ...
	I0923 10:23:45.280605    2156 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-205800"
	I0923 10:23:45.281163    2156 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-205800"
	I0923 10:23:45.281163    2156 host.go:66] Checking if "addons-205800" exists ...
	I0923 10:23:45.280605    2156 addons.go:69] Setting storage-provisioner=true in profile "addons-205800"
	I0923 10:23:45.281163    2156 addons.go:234] Setting addon storage-provisioner=true in "addons-205800"
	I0923 10:23:45.281163    2156 host.go:66] Checking if "addons-205800" exists ...
	I0923 10:23:45.281163    2156 config.go:182] Loaded profile config "addons-205800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:23:45.280605    2156 addons.go:69] Setting volcano=true in profile "addons-205800"
	I0923 10:23:45.282145    2156 addons.go:234] Setting addon volcano=true in "addons-205800"
	I0923 10:23:45.282145    2156 host.go:66] Checking if "addons-205800" exists ...
	I0923 10:23:45.282145    2156 out.go:177] * Verifying Kubernetes components...
	I0923 10:23:45.280605    2156 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-205800"
	I0923 10:23:45.282145    2156 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-205800"
	I0923 10:23:45.280605    2156 addons.go:69] Setting volumesnapshots=true in profile "addons-205800"
	I0923 10:23:45.283006    2156 addons.go:234] Setting addon volumesnapshots=true in "addons-205800"
	I0923 10:23:45.283190    2156 host.go:66] Checking if "addons-205800" exists ...
	I0923 10:23:45.280605    2156 addons.go:69] Setting inspektor-gadget=true in profile "addons-205800"
	I0923 10:23:45.283365    2156 addons.go:234] Setting addon inspektor-gadget=true in "addons-205800"
	I0923 10:23:45.280605    2156 addons.go:234] Setting addon ingress-dns=true in "addons-205800"
	I0923 10:23:45.283753    2156 host.go:66] Checking if "addons-205800" exists ...
	I0923 10:23:45.281163    2156 host.go:66] Checking if "addons-205800" exists ...
	I0923 10:23:45.280605    2156 addons.go:69] Setting default-storageclass=true in profile "addons-205800"
	I0923 10:23:45.283894    2156 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-205800"
	I0923 10:23:45.283528    2156 host.go:66] Checking if "addons-205800" exists ...
	I0923 10:23:45.318638    2156 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:23:45.328195    2156 cli_runner.go:164] Run: docker container inspect addons-205800 --format={{.State.Status}}
	I0923 10:23:45.329196    2156 cli_runner.go:164] Run: docker container inspect addons-205800 --format={{.State.Status}}
	I0923 10:23:45.334191    2156 cli_runner.go:164] Run: docker container inspect addons-205800 --format={{.State.Status}}
	I0923 10:23:45.335209    2156 cli_runner.go:164] Run: docker container inspect addons-205800 --format={{.State.Status}}
	I0923 10:23:45.336186    2156 cli_runner.go:164] Run: docker container inspect addons-205800 --format={{.State.Status}}
	I0923 10:23:45.338869    2156 cli_runner.go:164] Run: docker container inspect addons-205800 --format={{.State.Status}}
	I0923 10:23:45.339571    2156 cli_runner.go:164] Run: docker container inspect addons-205800 --format={{.State.Status}}
	I0923 10:23:45.340920    2156 cli_runner.go:164] Run: docker container inspect addons-205800 --format={{.State.Status}}
	I0923 10:23:45.341044    2156 cli_runner.go:164] Run: docker container inspect addons-205800 --format={{.State.Status}}
	I0923 10:23:45.341418    2156 cli_runner.go:164] Run: docker container inspect addons-205800 --format={{.State.Status}}
	I0923 10:23:45.343072    2156 cli_runner.go:164] Run: docker container inspect addons-205800 --format={{.State.Status}}
	I0923 10:23:45.344058    2156 cli_runner.go:164] Run: docker container inspect addons-205800 --format={{.State.Status}}
	I0923 10:23:45.346043    2156 cli_runner.go:164] Run: docker container inspect addons-205800 --format={{.State.Status}}
	I0923 10:23:45.348043    2156 cli_runner.go:164] Run: docker container inspect addons-205800 --format={{.State.Status}}
	I0923 10:23:45.349050    2156 cli_runner.go:164] Run: docker container inspect addons-205800 --format={{.State.Status}}
	I0923 10:23:45.485232    2156 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 10:23:45.488244    2156 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:23:45.488244    2156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 10:23:45.500269    2156 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 10:23:45.503257    2156 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:23:45.503257    2156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 10:23:45.503257    2156 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 10:23:45.505259    2156 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 10:23:45.505259    2156 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 10:23:45.506240    2156 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 10:23:45.508255    2156 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 10:23:45.508255    2156 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 10:23:45.508255    2156 host.go:66] Checking if "addons-205800" exists ...
	I0923 10:23:45.514774    2156 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-205800"
	I0923 10:23:45.514824    2156 host.go:66] Checking if "addons-205800" exists ...
	I0923 10:23:45.519605    2156 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 10:23:45.520126    2156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205800
	I0923 10:23:45.522709    2156 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 10:23:45.522709    2156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 10:23:45.522709    2156 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 10:23:45.533965    2156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205800
	I0923 10:23:45.534979    2156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}'" addons-205800
	I0923 10:23:45.539970    2156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205800
	I0923 10:23:45.540967    2156 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0923 10:23:45.540967    2156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-205800
	I0923 10:23:45.542976    2156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205800
	I0923 10:23:45.545005    2156 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 10:23:45.546975    2156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205800
	I0923 10:23:45.551964    2156 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0923 10:23:45.554055    2156 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 10:23:45.555971    2156 addons.go:234] Setting addon default-storageclass=true in "addons-205800"
	I0923 10:23:45.556972    2156 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0923 10:23:45.556972    2156 host.go:66] Checking if "addons-205800" exists ...
	I0923 10:23:45.560974    2156 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 10:23:45.558976    2156 cli_runner.go:164] Run: docker container inspect addons-205800 --format={{.State.Status}}
	I0923 10:23:45.559976    2156 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0923 10:23:45.559976    2156 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 10:23:45.560974    2156 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 10:23:45.560974    2156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0923 10:23:45.565974    2156 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 10:23:45.565974    2156 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 10:23:45.569960    2156 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0923 10:23:45.572974    2156 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 10:23:45.572974    2156 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 10:23:45.572974    2156 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0923 10:23:45.573983    2156 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0923 10:23:45.574984    2156 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 10:23:45.581974    2156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205800
	I0923 10:23:45.583977    2156 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 10:23:45.583977    2156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0923 10:23:45.585998    2156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205800
	I0923 10:23:45.591326    2156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205800
	I0923 10:23:45.595056    2156 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 10:23:45.599306    2156 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 10:23:45.599500    2156 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:23:45.603493    2156 cli_runner.go:164] Run: docker container inspect addons-205800 --format={{.State.Status}}
	I0923 10:23:45.608490    2156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205800
	I0923 10:23:45.610515    2156 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:23:45.616434    2156 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 10:23:45.617343    2156 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 10:23:45.617343    2156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0923 10:23:45.623919    2156 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 10:23:45.639107    2156 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 10:23:45.639107    2156 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 10:23:45.639107    2156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205800
	I0923 10:23:45.659109    2156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56907 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\addons-205800\id_rsa Username:docker}
	I0923 10:23:45.660107    2156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56907 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\addons-205800\id_rsa Username:docker}
	I0923 10:23:45.663108    2156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205800
	I0923 10:23:45.663108    2156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56907 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\addons-205800\id_rsa Username:docker}
	I0923 10:23:45.667729    2156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56907 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\addons-205800\id_rsa Username:docker}
	I0923 10:23:45.681108    2156 out.go:201] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                      │
	│    Registry addon with docker driver uses port 56905 please use that instead of default port 5000    │
	│                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 10:23:45.686116    2156 out.go:177] * For more information see: https://minikube.sigs.k8s.io/docs/drivers/docker
	I0923 10:23:45.689125    2156 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 10:23:45.692099    2156 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 10:23:45.696113    2156 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 10:23:45.696113    2156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 10:23:45.701125    2156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56907 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\addons-205800\id_rsa Username:docker}
	I0923 10:23:45.716122    2156 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 10:23:45.718117    2156 out.go:177]   - Using image docker.io/busybox:stable
	I0923 10:23:45.718117    2156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205800
	I0923 10:23:45.720101    2156 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:23:45.720101    2156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 10:23:45.722123    2156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56907 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\addons-205800\id_rsa Username:docker}
	I0923 10:23:45.722123    2156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56907 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\addons-205800\id_rsa Username:docker}
	I0923 10:23:45.733213    2156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56907 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\addons-205800\id_rsa Username:docker}
	I0923 10:23:45.737154    2156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56907 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\addons-205800\id_rsa Username:docker}
	I0923 10:23:45.743167    2156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205800
	I0923 10:23:45.746149    2156 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 10:23:45.746149    2156 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 10:23:45.757156    2156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56907 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\addons-205800\id_rsa Username:docker}
	I0923 10:23:45.758161    2156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205800
	I0923 10:23:45.765149    2156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56907 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\addons-205800\id_rsa Username:docker}
	I0923 10:23:45.802138    2156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56907 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\addons-205800\id_rsa Username:docker}
	I0923 10:23:45.810145    2156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56907 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\addons-205800\id_rsa Username:docker}
	I0923 10:23:45.821143    2156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56907 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\addons-205800\id_rsa Username:docker}
	W0923 10:23:45.831171    2156 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0923 10:23:45.831239    2156 retry.go:31] will retry after 165.407154ms: ssh: handshake failed: EOF
	W0923 10:23:45.831308    2156 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0923 10:23:45.831308    2156 retry.go:31] will retry after 317.639903ms: ssh: handshake failed: EOF
	W0923 10:23:46.028484    2156 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0923 10:23:46.028484    2156 retry.go:31] will retry after 519.01804ms: ssh: handshake failed: EOF
	I0923 10:23:46.436384    2156 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.1176933s)
	I0923 10:23:46.436512    2156 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.1563118s)
	I0923 10:23:46.436756    2156 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 10:23:46.454240    2156 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:23:46.752057    2156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 10:23:46.933405    2156 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 10:23:46.933405    2156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 10:23:46.937567    2156 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 10:23:46.937567    2156 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 10:23:46.941195    2156 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 10:23:46.941195    2156 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 10:23:46.961909    2156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 10:23:46.961909    2156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 10:23:46.962626    2156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 10:23:46.962685    2156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:23:46.962685    2156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:23:47.233584    2156 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 10:23:47.233584    2156 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 10:23:47.249481    2156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:23:47.256478    2156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 10:23:47.433500    2156 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 10:23:47.433500    2156 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 10:23:47.533881    2156 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 10:23:47.533881    2156 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 10:23:47.535731    2156 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 10:23:47.535731    2156 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 10:23:47.535869    2156 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 10:23:47.535869    2156 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 10:23:47.732519    2156 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 10:23:47.732519    2156 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 10:23:47.934020    2156 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 10:23:47.934020    2156 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 10:23:48.029205    2156 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 10:23:48.029205    2156 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 10:23:48.131902    2156 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 10:23:48.131902    2156 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 10:23:48.131902    2156 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:23:48.131902    2156 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 10:23:48.232959    2156 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 10:23:48.232959    2156 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 10:23:48.232959    2156 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 10:23:48.232959    2156 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 10:23:48.546687    2156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:23:48.628866    2156 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 10:23:48.628866    2156 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 10:23:48.630079    2156 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:23:48.630079    2156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 10:23:48.634094    2156 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:23:48.634094    2156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 10:23:48.832780    2156 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 10:23:48.832780    2156 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 10:23:48.832780    2156 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 10:23:48.832780    2156 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 10:23:49.149864    2156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:23:49.230382    2156 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 10:23:49.230382    2156 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 10:23:49.335475    2156 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 10:23:49.335612    2156 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 10:23:49.429823    2156 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:23:49.430203    2156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 10:23:49.448522    2156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:23:49.738120    2156 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 10:23:49.738268    2156 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 10:23:49.850161    2156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:23:49.933976    2156 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 10:23:49.933976    2156 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 10:23:50.233898    2156 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 10:23:50.233898    2156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 10:23:50.533937    2156 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.0968452s)
	I0923 10:23:50.533937    2156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.7817011s)
	I0923 10:23:50.533937    2156 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.0795039s)
	I0923 10:23:50.533937    2156 start.go:971] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I0923 10:23:50.544801    2156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-205800
	I0923 10:23:50.639581    2156 node_ready.go:35] waiting up to 6m0s for node "addons-205800" to be "Ready" ...
	I0923 10:23:50.735832    2156 node_ready.go:49] node "addons-205800" has status "Ready":"True"
	I0923 10:23:50.735944    2156 node_ready.go:38] duration metric: took 96.247ms for node "addons-205800" to be "Ready" ...
	I0923 10:23:50.735944    2156 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:23:50.834202    2156 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:23:50.834202    2156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 10:23:50.834872    2156 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 10:23:50.834872    2156 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 10:23:51.240535    2156 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jzr26" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:51.532796    2156 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-205800" context rescaled to 1 replicas
	I0923 10:23:51.549353    2156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:23:51.628495    2156 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 10:23:51.628495    2156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 10:23:52.034450    2156 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 10:23:52.034527    2156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 10:23:52.731619    2156 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:23:52.731619    2156 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 10:23:53.449469    2156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:23:53.837016    2156 pod_ready.go:103] pod "coredns-7c65d6cfc9-jzr26" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:55.031903    2156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.0696121s)
	I0923 10:23:55.032214    2156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.0698853s)
	I0923 10:23:55.934899    2156 pod_ready.go:103] pod "coredns-7c65d6cfc9-jzr26" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:57.730006    2156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (10.7668113s)
	I0923 10:23:57.731441    2156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.7668113s)
	I0923 10:23:58.635718    2156 pod_ready.go:103] pod "coredns-7c65d6cfc9-jzr26" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:59.446735    2156 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 10:23:59.456426    2156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205800
	I0923 10:23:59.537300    2156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56907 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\addons-205800\id_rsa Username:docker}
	I0923 10:24:00.641648    2156 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 10:24:00.732676    2156 pod_ready.go:103] pod "coredns-7c65d6cfc9-jzr26" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:01.028634    2156 addons.go:234] Setting addon gcp-auth=true in "addons-205800"
	I0923 10:24:01.028634    2156 host.go:66] Checking if "addons-205800" exists ...
	I0923 10:24:01.056095    2156 cli_runner.go:164] Run: docker container inspect addons-205800 --format={{.State.Status}}
	I0923 10:24:01.151953    2156 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 10:24:01.159940    2156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-205800
	I0923 10:24:01.232069    2156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56907 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\addons-205800\id_rsa Username:docker}
	I0923 10:24:03.035596    2156 pod_ready.go:103] pod "coredns-7c65d6cfc9-jzr26" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:05.243034    2156 pod_ready.go:103] pod "coredns-7c65d6cfc9-jzr26" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:07.340615    2156 pod_ready.go:103] pod "coredns-7c65d6cfc9-jzr26" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:09.429401    2156 pod_ready.go:103] pod "coredns-7c65d6cfc9-jzr26" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:11.531177    2156 pod_ready.go:103] pod "coredns-7c65d6cfc9-jzr26" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:13.426096    2156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (26.4620573s)
	I0923 10:24:13.426096    2156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (26.1683803s)
	I0923 10:24:13.426096    2156 addons.go:475] Verifying addon ingress=true in "addons-205800"
	I0923 10:24:13.426350    2156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (26.1756307s)
	I0923 10:24:13.427091    2156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (24.2760787s)
	I0923 10:24:13.427449    2156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (23.9777929s)
	I0923 10:24:13.428572    2156 addons.go:475] Verifying addon registry=true in "addons-205800"
	I0923 10:24:13.426859    2156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (24.8789958s)
	I0923 10:24:13.428829    2156 addons.go:475] Verifying addon metrics-server=true in "addons-205800"
	I0923 10:24:13.429343    2156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (23.5780666s)
	I0923 10:24:13.429893    2156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (21.8795052s)
	I0923 10:24:13.433197    2156 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-205800 service yakd-dashboard -n yakd-dashboard
	
	W0923 10:24:13.433367    2156 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 10:24:13.440909    2156 out.go:177] * Verifying ingress addon...
	I0923 10:24:13.443179    2156 out.go:177] * Verifying registry addon...
	I0923 10:24:13.444059    2156 retry.go:31] will retry after 131.335219ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 10:24:13.451879    2156 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 10:24:13.452881    2156 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0923 10:24:13.534077    2156 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0923 10:24:13.534077    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:13.534077    2156 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 10:24:13.534077    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:13.590142    2156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:24:13.927980    2156 pod_ready.go:103] pod "coredns-7c65d6cfc9-jzr26" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:14.036706    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:14.232100    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:14.540484    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:14.541931    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:15.048920    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:15.232724    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:15.541628    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:15.735118    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:16.037935    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:16.038006    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:16.130746    2156 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (14.9780847s)
	I0923 10:24:16.130746    2156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (22.6801663s)
	I0923 10:24:16.130864    2156 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-205800"
	I0923 10:24:16.134047    2156 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:24:16.138885    2156 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 10:24:16.140518    2156 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 10:24:16.146158    2156 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 10:24:16.146158    2156 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 10:24:16.146158    2156 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 10:24:16.252957    2156 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 10:24:16.252957    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:16.253249    2156 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 10:24:16.253294    2156 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 10:24:16.352372    2156 pod_ready.go:103] pod "coredns-7c65d6cfc9-jzr26" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:16.445781    2156 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:24:16.445781    2156 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 10:24:16.530067    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:16.531505    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:16.644479    2156 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:24:16.656271    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:16.961271    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:16.961741    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:17.155750    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:17.529778    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:17.530740    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:17.733525    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:18.028127    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:18.029101    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:18.228953    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:18.532366    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:18.533084    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:18.536477    2156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.9460522s)
	I0923 10:24:18.660155    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:18.838400    2156 pod_ready.go:103] pod "coredns-7c65d6cfc9-jzr26" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:19.031122    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:19.032221    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:19.237020    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:19.548096    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:19.657483    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:19.750893    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:19.931952    2156 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (3.2873169s)
	I0923 10:24:19.939736    2156 addons.go:475] Verifying addon gcp-auth=true in "addons-205800"
	I0923 10:24:19.942475    2156 out.go:177] * Verifying gcp-auth addon...
	I0923 10:24:19.948030    2156 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 10:24:19.954813    2156 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 10:24:19.962227    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:19.962395    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:20.156251    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:20.465650    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:20.465650    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:20.656436    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:20.961378    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:20.965412    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:21.155766    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:21.256553    2156 pod_ready.go:103] pod "coredns-7c65d6cfc9-jzr26" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:21.461170    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:21.461667    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:21.656392    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:21.960041    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:21.962052    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:22.155606    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:22.462389    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:22.463498    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:22.655912    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:22.964044    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:22.965245    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:23.156468    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:23.258260    2156 pod_ready.go:103] pod "coredns-7c65d6cfc9-jzr26" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:23.464587    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:23.466935    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:23.654546    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:23.964209    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:23.964896    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:24.155438    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:24.460216    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:24.462917    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:24.655070    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:24.961758    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:24.962905    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:25.155232    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:25.261511    2156 pod_ready.go:93] pod "coredns-7c65d6cfc9-jzr26" in "kube-system" namespace has status "Ready":"True"
	I0923 10:24:25.261511    2156 pod_ready.go:82] duration metric: took 34.019367s for pod "coredns-7c65d6cfc9-jzr26" in "kube-system" namespace to be "Ready" ...
	I0923 10:24:25.261511    2156 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tmslw" in "kube-system" namespace to be "Ready" ...
	I0923 10:24:25.268824    2156 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-tmslw" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-tmslw" not found
	I0923 10:24:25.268887    2156 pod_ready.go:82] duration metric: took 7.3755ms for pod "coredns-7c65d6cfc9-tmslw" in "kube-system" namespace to be "Ready" ...
	E0923 10:24:25.268887    2156 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-tmslw" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-tmslw" not found
	I0923 10:24:25.268940    2156 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-205800" in "kube-system" namespace to be "Ready" ...
	I0923 10:24:25.281530    2156 pod_ready.go:93] pod "etcd-addons-205800" in "kube-system" namespace has status "Ready":"True"
	I0923 10:24:25.281530    2156 pod_ready.go:82] duration metric: took 12.5898ms for pod "etcd-addons-205800" in "kube-system" namespace to be "Ready" ...
	I0923 10:24:25.282065    2156 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-205800" in "kube-system" namespace to be "Ready" ...
	I0923 10:24:25.336000    2156 pod_ready.go:93] pod "kube-apiserver-addons-205800" in "kube-system" namespace has status "Ready":"True"
	I0923 10:24:25.336076    2156 pod_ready.go:82] duration metric: took 53.9589ms for pod "kube-apiserver-addons-205800" in "kube-system" namespace to be "Ready" ...
	I0923 10:24:25.336076    2156 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-205800" in "kube-system" namespace to be "Ready" ...
	I0923 10:24:25.348045    2156 pod_ready.go:93] pod "kube-controller-manager-addons-205800" in "kube-system" namespace has status "Ready":"True"
	I0923 10:24:25.348157    2156 pod_ready.go:82] duration metric: took 12.0798ms for pod "kube-controller-manager-addons-205800" in "kube-system" namespace to be "Ready" ...
	I0923 10:24:25.348157    2156 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pjbd2" in "kube-system" namespace to be "Ready" ...
	I0923 10:24:25.450575    2156 pod_ready.go:93] pod "kube-proxy-pjbd2" in "kube-system" namespace has status "Ready":"True"
	I0923 10:24:25.450709    2156 pod_ready.go:82] duration metric: took 102.5473ms for pod "kube-proxy-pjbd2" in "kube-system" namespace to be "Ready" ...
	I0923 10:24:25.450709    2156 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-205800" in "kube-system" namespace to be "Ready" ...
	I0923 10:24:25.459831    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:25.462573    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:25.656115    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:25.850828    2156 pod_ready.go:93] pod "kube-scheduler-addons-205800" in "kube-system" namespace has status "Ready":"True"
	I0923 10:24:25.850828    2156 pod_ready.go:82] duration metric: took 400.1004ms for pod "kube-scheduler-addons-205800" in "kube-system" namespace to be "Ready" ...
	I0923 10:24:25.850828    2156 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-7gt49" in "kube-system" namespace to be "Ready" ...
	I0923 10:24:25.962338    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:25.967547    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:26.156560    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:26.462576    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:26.465695    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:26.658507    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:26.961942    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:26.963531    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:27.156207    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:27.465759    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:27.465759    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:27.656477    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:27.868982    2156 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-7gt49" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:27.961348    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:27.965854    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:28.155241    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:28.461724    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:28.462373    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:28.656774    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:28.959229    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:28.963836    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:29.156008    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:29.462022    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:29.465064    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:29.656790    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:29.962938    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:29.964111    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:30.160684    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:30.369539    2156 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-7gt49" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:30.462624    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:30.465548    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:30.662408    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:30.962759    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:30.963075    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:31.154461    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:31.462182    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:31.464708    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:31.655328    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:31.961032    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:31.961123    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:32.156139    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:32.461196    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:32.462234    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:32.657754    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:32.867195    2156 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-7gt49" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:33.033038    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:33.034457    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:33.156320    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:33.461188    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:33.461834    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:33.654651    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:34.194537    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:34.195579    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:34.196283    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:34.461869    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:34.462640    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:34.659655    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:34.974124    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:34.974729    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:34.978365    2156 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-7gt49" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:35.189862    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:35.458889    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:35.461050    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:35.658424    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:35.960499    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:35.961481    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:36.157320    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:36.460564    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:36.462566    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:36.656588    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:36.867592    2156 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-7gt49" in "kube-system" namespace has status "Ready":"True"
	I0923 10:24:36.867592    2156 pod_ready.go:82] duration metric: took 11.0162426s for pod "nvidia-device-plugin-daemonset-7gt49" in "kube-system" namespace to be "Ready" ...
	I0923 10:24:36.867592    2156 pod_ready.go:39] duration metric: took 46.1294659s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:24:36.867592    2156 api_server.go:52] waiting for apiserver process to appear ...
	I0923 10:24:36.884635    2156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:24:36.952616    2156 api_server.go:72] duration metric: took 51.6699059s to wait for apiserver process to appear ...
	I0923 10:24:36.952616    2156 api_server.go:88] waiting for apiserver healthz status ...
	I0923 10:24:36.952616    2156 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56906/healthz ...
	I0923 10:24:36.958608    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:36.964604    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:37.027644    2156 api_server.go:279] https://127.0.0.1:56906/healthz returned 200:
	ok
	I0923 10:24:37.033723    2156 api_server.go:141] control plane version: v1.31.1
	I0923 10:24:37.033836    2156 api_server.go:131] duration metric: took 81.2162ms to wait for apiserver health ...
	I0923 10:24:37.033932    2156 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 10:24:37.056693    2156 system_pods.go:59] 17 kube-system pods found
	I0923 10:24:37.056693    2156 system_pods.go:61] "coredns-7c65d6cfc9-jzr26" [f8990748-9491-4c4f-9874-7b9e1a57d014] Running
	I0923 10:24:37.056693    2156 system_pods.go:61] "csi-hostpath-attacher-0" [f97a01d2-02d4-4851-85e6-35912d09f380] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 10:24:37.056693    2156 system_pods.go:61] "csi-hostpath-resizer-0" [e8c09dfe-63d9-4f87-9664-5efb151db822] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 10:24:37.056693    2156 system_pods.go:61] "csi-hostpathplugin-hjp8t" [a29dd1fb-4837-4801-9066-c35c08ac2e37] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 10:24:37.056693    2156 system_pods.go:61] "etcd-addons-205800" [4c58188c-b3a6-495f-af50-66f9e459bfde] Running
	I0923 10:24:37.056693    2156 system_pods.go:61] "kube-apiserver-addons-205800" [c21a8cb5-6df8-451c-8171-71fcc7558da7] Running
	I0923 10:24:37.056693    2156 system_pods.go:61] "kube-controller-manager-addons-205800" [897332b2-d452-4feb-b549-bc4807fc6e0a] Running
	I0923 10:24:37.056693    2156 system_pods.go:61] "kube-ingress-dns-minikube" [48e98020-4c58-419d-a675-0d6cee84401f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0923 10:24:37.056693    2156 system_pods.go:61] "kube-proxy-pjbd2" [35c5d37e-ac86-4780-98d7-ddc462c009a2] Running
	I0923 10:24:37.056693    2156 system_pods.go:61] "kube-scheduler-addons-205800" [db3a6cbf-38ae-4603-9356-32c8edcf8953] Running
	I0923 10:24:37.056693    2156 system_pods.go:61] "metrics-server-84c5f94fbc-227ml" [ef418e12-9463-459c-addb-8d5515dc9976] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 10:24:37.056693    2156 system_pods.go:61] "nvidia-device-plugin-daemonset-7gt49" [305cf315-1bc1-4aa8-9a3b-0947e4e7da3c] Running
	I0923 10:24:37.056693    2156 system_pods.go:61] "registry-66c9cd494c-974hj" [80d9e439-0bf0-4a73-89d3-b97a73cfe368] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 10:24:37.056693    2156 system_pods.go:61] "registry-proxy-lgf7x" [fc398532-2b4f-4f30-bd23-308f1c818be5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 10:24:37.056693    2156 system_pods.go:61] "snapshot-controller-56fcc65765-74jkh" [63c9e998-0992-4ca0-8e23-f4ede75f8f77] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:24:37.056693    2156 system_pods.go:61] "snapshot-controller-56fcc65765-hzxsh" [a0b526eb-a61d-464e-94db-f27284e44c3b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:24:37.056693    2156 system_pods.go:61] "storage-provisioner" [ef788f7a-f5b0-47f9-8094-e5354dacf6c3] Running
	I0923 10:24:37.056693    2156 system_pods.go:74] duration metric: took 22.7155ms to wait for pod list to return data ...
	I0923 10:24:37.056693    2156 default_sa.go:34] waiting for default service account to be created ...
	I0923 10:24:37.063730    2156 default_sa.go:45] found service account: "default"
	I0923 10:24:37.063730    2156 default_sa.go:55] duration metric: took 7.0365ms for default service account to be created ...
	I0923 10:24:37.063730    2156 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 10:24:37.150512    2156 system_pods.go:86] 17 kube-system pods found
	I0923 10:24:37.150512    2156 system_pods.go:89] "coredns-7c65d6cfc9-jzr26" [f8990748-9491-4c4f-9874-7b9e1a57d014] Running
	I0923 10:24:37.150512    2156 system_pods.go:89] "csi-hostpath-attacher-0" [f97a01d2-02d4-4851-85e6-35912d09f380] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 10:24:37.150512    2156 system_pods.go:89] "csi-hostpath-resizer-0" [e8c09dfe-63d9-4f87-9664-5efb151db822] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 10:24:37.150512    2156 system_pods.go:89] "csi-hostpathplugin-hjp8t" [a29dd1fb-4837-4801-9066-c35c08ac2e37] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 10:24:37.150512    2156 system_pods.go:89] "etcd-addons-205800" [4c58188c-b3a6-495f-af50-66f9e459bfde] Running
	I0923 10:24:37.150512    2156 system_pods.go:89] "kube-apiserver-addons-205800" [c21a8cb5-6df8-451c-8171-71fcc7558da7] Running
	I0923 10:24:37.150512    2156 system_pods.go:89] "kube-controller-manager-addons-205800" [897332b2-d452-4feb-b549-bc4807fc6e0a] Running
	I0923 10:24:37.150512    2156 system_pods.go:89] "kube-ingress-dns-minikube" [48e98020-4c58-419d-a675-0d6cee84401f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0923 10:24:37.150512    2156 system_pods.go:89] "kube-proxy-pjbd2" [35c5d37e-ac86-4780-98d7-ddc462c009a2] Running
	I0923 10:24:37.150512    2156 system_pods.go:89] "kube-scheduler-addons-205800" [db3a6cbf-38ae-4603-9356-32c8edcf8953] Running
	I0923 10:24:37.150512    2156 system_pods.go:89] "metrics-server-84c5f94fbc-227ml" [ef418e12-9463-459c-addb-8d5515dc9976] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 10:24:37.150512    2156 system_pods.go:89] "nvidia-device-plugin-daemonset-7gt49" [305cf315-1bc1-4aa8-9a3b-0947e4e7da3c] Running
	I0923 10:24:37.150512    2156 system_pods.go:89] "registry-66c9cd494c-974hj" [80d9e439-0bf0-4a73-89d3-b97a73cfe368] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 10:24:37.150512    2156 system_pods.go:89] "registry-proxy-lgf7x" [fc398532-2b4f-4f30-bd23-308f1c818be5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 10:24:37.150512    2156 system_pods.go:89] "snapshot-controller-56fcc65765-74jkh" [63c9e998-0992-4ca0-8e23-f4ede75f8f77] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:24:37.150512    2156 system_pods.go:89] "snapshot-controller-56fcc65765-hzxsh" [a0b526eb-a61d-464e-94db-f27284e44c3b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 10:24:37.150512    2156 system_pods.go:89] "storage-provisioner" [ef788f7a-f5b0-47f9-8094-e5354dacf6c3] Running
	I0923 10:24:37.150512    2156 system_pods.go:126] duration metric: took 85.8311ms to wait for k8s-apps to be running ...
	I0923 10:24:37.150512    2156 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 10:24:37.158507    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:37.166499    2156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:24:37.247285    2156 system_svc.go:56] duration metric: took 96.7685ms WaitForService to wait for kubelet
	I0923 10:24:37.247285    2156 kubeadm.go:582] duration metric: took 51.9645606s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:24:37.247285    2156 node_conditions.go:102] verifying NodePressure condition ...
	I0923 10:24:37.260286    2156 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0923 10:24:37.260286    2156 node_conditions.go:123] node cpu capacity is 16
	I0923 10:24:37.260286    2156 node_conditions.go:105] duration metric: took 13.0005ms to run NodePressure ...
	I0923 10:24:37.260286    2156 start.go:241] waiting for startup goroutines ...
	I0923 10:24:37.460838    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:37.461842    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:37.656832    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:37.959873    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:37.961846    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:38.156504    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:38.463121    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:38.463121    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:38.656179    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:38.960174    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:38.962161    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:39.157297    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:39.459328    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:39.462355    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:39.658334    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:39.960905    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:39.960905    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:40.155789    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:40.463630    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:40.467191    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:40.660468    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:40.963094    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:40.964382    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:41.159413    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:41.463390    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:41.464022    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:41.656429    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:41.962342    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:41.965885    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:42.156628    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:42.463298    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:42.465294    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:42.662763    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:42.960970    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:42.963304    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:43.156748    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:43.461782    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:43.462579    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:43.656137    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:43.962266    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:43.963569    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:44.157839    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:44.460146    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:44.465289    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:44.655877    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:44.963661    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:44.963661    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:45.156165    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:45.525747    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:45.527090    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:45.656416    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:45.962972    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:45.964160    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:46.156620    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:46.462335    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:46.463290    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:46.655829    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:46.961924    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:46.964189    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:47.155868    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:47.463689    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:47.463689    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:47.655954    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:47.960079    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:47.962081    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:48.157882    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:48.464392    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:48.466781    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:48.657430    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:48.962185    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:48.965699    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:49.157226    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:49.461604    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:49.461604    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:49.658059    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:49.960677    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:49.960677    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:50.156342    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:50.462873    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:50.465801    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:50.655900    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:50.965470    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:50.965579    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:51.156605    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:51.461860    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:51.462010    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:51.656838    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:51.962478    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:51.966199    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:52.160562    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:52.462925    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:52.464956    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:52.658395    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:52.963667    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:52.963667    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:53.156777    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:53.463419    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:53.463419    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:53.656427    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:53.962627    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:53.964775    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:54.157287    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:54.460942    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:54.462505    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:54.656543    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:54.961435    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:54.962422    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:55.157582    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:55.464604    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:55.464737    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:55.656518    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:55.961757    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:55.965180    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:56.157453    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:56.463732    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:56.466661    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:56.657407    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:56.964424    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:56.964695    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:57.157307    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:57.462556    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:57.466068    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:57.656934    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:57.964328    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:57.965340    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:58.156894    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:58.463170    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:58.463170    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:58.657009    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:58.962729    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:58.962882    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:59.156993    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:59.459799    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:59.461778    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:59.655913    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:59.962007    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:59.964528    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:00.157347    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:00.464069    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:00.464738    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:00.843335    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:00.962856    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:00.963844    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:01.315546    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:01.462113    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:01.462113    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:01.656996    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:01.964378    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:01.964378    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:02.716494    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:02.717034    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:02.717034    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:02.724550    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:03.378896    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:03.378896    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:03.379306    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:03.462062    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:03.464064    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:03.657682    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:03.960487    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:03.962502    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:04.157810    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:04.463143    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:04.464766    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:04.660227    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:04.964937    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:04.964937    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:05.158329    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:05.464101    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:05.466092    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:05.665935    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:05.965283    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:05.966297    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:06.163198    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:06.526498    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:06.526498    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:06.659369    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:06.967559    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:06.969548    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:07.159567    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:07.460965    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:07.465006    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:07.657996    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:07.961065    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:07.964079    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:08.160077    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:08.463097    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:08.464087    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:08.656976    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:08.963315    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:08.965876    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:09.158321    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:09.462071    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:09.463354    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:09.658582    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:09.964708    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:09.966612    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:10.158378    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:10.462308    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:10.465246    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:10.658062    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:10.962509    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:10.966668    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:11.158747    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:11.466783    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:11.467444    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:11.658511    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:11.962948    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:11.965496    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:12.158058    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:12.464102    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:12.467008    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:13.007955    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:13.009563    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:13.011490    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:13.297090    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:13.463549    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:13.465582    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:13.656961    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:13.967198    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:13.967509    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:14.158993    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:14.462578    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:14.464590    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:14.659598    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:14.965448    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:14.967147    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:15.158058    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:15.464335    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:15.465548    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:15.658038    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:15.963738    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:15.964716    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:16.158678    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:16.465103    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:16.465974    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:16.664675    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:16.964853    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:16.965469    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:17.160236    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:17.561266    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:17.562331    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:17.657066    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:17.962002    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:17.963005    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:18.158027    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:18.464567    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:18.465572    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:18.659590    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:18.961251    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:18.962265    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:19.158878    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:25:19.465597    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:25:19.466916    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:25:19.658343    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... ~260 near-identical polling entries elided: kapi.go:96 re-checked the same three label selectors ("kubernetes.io/minikube-addons=registry", "app.kubernetes.io/name=ingress-nginx", "kubernetes.io/minikube-addons=csi-hostpath-driver") roughly every 500ms from 10:25:19 through 10:26:03, and every pod remained Pending: [<nil>] throughout ...]
	I0923 10:26:03.163842    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:03.467414    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:03.467841    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:03.659326    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:03.966027    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:03.969242    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:04.159810    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:04.467538    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:04.468275    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:04.660250    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:04.967596    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:04.971561    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:05.160145    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:05.467583    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:05.468237    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:05.663878    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:06.295936    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:06.299419    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:06.300464    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:06.467352    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:06.471282    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:06.662897    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:06.964202    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:06.965774    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:07.159599    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:07.462389    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:26:07.464381    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:07.660662    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:07.964276    2156 kapi.go:107] duration metric: took 1m54.5069807s to wait for kubernetes.io/minikube-addons=registry ...
	I0923 10:26:07.968271    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:08.161233    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:08.467021    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:08.662012    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:08.966126    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:09.160470    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:09.468021    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:09.660615    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:09.968106    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:10.160100    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:10.468729    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:10.660215    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:10.969037    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:11.160709    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:11.467003    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:11.659085    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:11.967965    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:12.160920    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:12.472380    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:12.661652    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:13.023214    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:13.160234    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:13.466681    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:13.661811    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:14.019959    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:14.161388    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:14.519217    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:14.723322    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:14.972262    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:15.160086    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:15.465223    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:15.661311    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:15.968800    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:16.167060    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:16.528773    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:16.663456    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:16.970102    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:17.160285    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:17.468666    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:17.661186    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:17.970690    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:18.161830    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:18.516779    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:18.662840    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:18.967903    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:19.161682    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:19.467854    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:19.661507    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:19.967577    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:20.164244    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:20.467711    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:20.670290    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:20.967523    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:21.163039    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:21.467799    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:21.688716    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:21.970133    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:22.162312    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:22.466781    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:22.660144    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:22.966779    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:23.171300    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:23.471945    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:23.661970    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:23.968622    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:24.164639    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:24.537103    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:24.662288    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:24.971040    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:25.161328    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:25.523550    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:25.660924    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:25.970739    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:26.162758    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:26.497749    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:26.664513    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:27.023583    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:27.160773    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:27.469201    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:27.664727    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:27.968628    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:28.167677    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:28.473418    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:28.661024    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:29.024819    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:29.161892    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:29.465517    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:29.660794    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:29.970477    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:30.160662    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:30.518119    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:30.662481    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:30.971856    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:31.161724    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:31.603269    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:31.662786    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:31.967032    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:32.224246    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:32.469243    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:32.662581    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:32.969004    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:33.162035    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:33.467078    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:33.664089    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:33.985539    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:34.163156    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:34.470251    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:34.660827    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:34.970317    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:35.162310    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:35.468631    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:35.663581    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:35.968114    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:36.162635    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:36.513933    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:36.660583    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:36.971705    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:37.175741    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:37.470496    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:37.664707    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:37.966515    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:38.164704    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:38.466717    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:38.663361    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:38.967530    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:39.162135    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:39.467640    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:39.662658    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:39.970353    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:40.163489    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:40.470607    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:40.717778    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:40.967956    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:41.216650    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:41.470361    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:41.747745    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:41.969590    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:42.167441    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:42.470820    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:42.661738    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:42.973895    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:43.161900    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:43.467911    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:43.661715    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:43.969021    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:44.164058    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:44.523756    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:44.661435    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:45.013394    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:45.164153    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:45.471904    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:45.661599    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:45.967033    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:46.182111    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:46.471074    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:46.663535    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:46.970753    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:47.165667    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:47.468315    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:47.669315    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:47.970314    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:48.166379    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:48.469361    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:48.661603    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:48.969075    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:49.161448    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:49.467995    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:49.663169    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:49.969504    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:50.163168    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:50.475233    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:50.661994    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:51.028646    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:51.217231    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:51.515978    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:51.661133    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:51.966876    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:52.162869    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:52.517429    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:52.716335    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:52.974842    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:53.162844    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:53.471369    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:53.664190    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:54.012645    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:54.216146    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:54.515464    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:54.719534    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:54.970970    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:55.164958    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:55.472950    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:55.663034    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:55.973225    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:56.163584    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:56.470796    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:56.662006    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:56.966705    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:57.165041    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:57.468152    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:57.662168    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:58.016179    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:58.164356    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:58.471215    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:58.662593    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:58.970067    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:59.164253    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:59.470586    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:26:59.663813    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:26:59.970401    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:00.163335    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:00.469969    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:00.666906    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:00.969238    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:01.165413    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:01.519157    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:01.665726    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:01.971053    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:02.162739    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:02.470555    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:02.667801    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:02.967814    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:03.166969    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:03.469826    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:03.667162    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:03.972178    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:04.163391    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:04.468339    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:04.663539    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:04.969920    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:05.166048    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:05.471215    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:05.662124    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:05.971665    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:06.167861    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:06.469474    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:06.715202    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:06.969908    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:07.257457    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:07.471306    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:07.717250    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:07.973383    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:08.163585    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:08.470741    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:08.669132    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:08.969335    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:09.166281    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:09.470255    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:09.663263    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:09.968266    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:10.164401    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:10.514926    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:10.664790    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:10.971683    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:11.164866    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:11.512727    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:11.663966    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:12.159833    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:12.164519    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:12.475760    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:12.665280    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:27:13.023163    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:13.164307    2156 kapi.go:107] duration metric: took 2m57.0097757s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 10:27:13.467522    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:13.971839    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:14.472382    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:15.018267    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:15.533786    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:15.972011    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:16.469795    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:16.971279    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:17.470541    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:17.970155    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:18.469152    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:18.970841    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:19.471498    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:19.969079    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:20.470986    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:20.970174    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:21.470722    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:21.970526    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:22.470137    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:22.971097    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:23.470292    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:23.969956    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:24.472461    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:24.970055    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:25.470272    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:25.972333    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:26.470672    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:26.970439    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:27.471590    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:27.972284    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:28.470241    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:28.970415    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:29.472221    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:29.969900    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:30.480293    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:30.971169    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:31.472012    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:31.970120    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:32.471519    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:32.971775    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:33.471128    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:33.971288    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:34.471546    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:34.972428    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:35.481977    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:35.972896    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:36.471768    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:36.970285    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:37.474007    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:37.970969    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:38.471354    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:38.971589    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:39.473124    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:39.972199    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:40.472106    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:40.971946    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:41.472003    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:41.969689    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:42.471416    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:42.972206    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:43.469287    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:43.971987    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:44.469980    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:44.970776    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:45.471969    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:45.972392    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:46.471346    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:46.971453    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:47.473322    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:47.971730    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:48.469951    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:48.973899    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:49.472316    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:49.967979    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:50.471745    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:50.973329    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:51.471788    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:51.970607    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:52.473280    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:52.971572    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:53.472354    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:53.978795    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:54.473545    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:54.973292    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:55.471646    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:55.973736    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:56.473722    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:56.970747    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:57.477219    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:57.970296    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:58.470158    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:58.971875    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:59.473301    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:27:59.972428    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:00.474550    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:00.977204    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:01.475689    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:01.973019    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:02.474035    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:02.971326    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:03.473593    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:03.973140    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:04.472215    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:04.973322    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:05.473661    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:05.973860    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:06.471584    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:06.972040    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:07.473325    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:07.971098    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:08.476302    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:08.973448    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:09.473268    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:09.973552    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:10.472463    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:10.972784    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:11.471815    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:11.973939    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:12.473809    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:12.972877    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:13.472671    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:13.972130    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:14.472908    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:14.973473    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:15.471521    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:16.070203    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:16.508522    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:16.973461    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:17.472211    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:18.069725    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:18.478324    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:18.970963    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:19.474584    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:19.974595    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:20.507418    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:20.972416    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:21.510045    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:21.972028    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:22.474878    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:22.991535    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:23.470810    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:24.030032    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:24.507531    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:25.010327    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:25.507009    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:26.007566    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:26.507557    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:26.976411    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:27.503980    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:27.978132    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:28.476506    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:28.975096    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:29.476187    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:30.004951    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:30.503636    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:30.974891    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:31.475859    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:31.971462    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:32.504471    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:32.976781    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:33.713700    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:33.973362    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:34.505246    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:34.975322    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:35.507549    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:35.975234    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:36.668633    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:37.014989    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:37.477550    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:38.009302    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:38.506527    2156 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:28:38.975674    2156 kapi.go:107] duration metric: took 4m25.5102351s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0923 10:29:48.012953    2156 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 10:29:48.012953    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:48.472665    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:48.971878    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:49.474702    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:49.972091    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:50.502616    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:50.972515    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:51.499267    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:51.973108    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:52.471341    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:52.972091    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:53.469965    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:53.995773    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:54.472589    2156 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:29:54.976128    2156 kapi.go:107] duration metric: took 5m35.012253s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0923 10:29:54.978682    2156 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-205800 cluster.
	I0923 10:29:54.981164    2156 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0923 10:29:54.984568    2156 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0923 10:29:54.986878    2156 out.go:177] * Enabled addons: default-storageclass, ingress-dns, cloud-spanner, nvidia-device-plugin, storage-provisioner, volcano, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0923 10:29:54.990069    2156 addons.go:510] duration metric: took 6m9.6923658s for enable addons: enabled=[default-storageclass ingress-dns cloud-spanner nvidia-device-plugin storage-provisioner volcano metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0923 10:29:54.990069    2156 start.go:246] waiting for cluster config update ...
	I0923 10:29:54.990069    2156 start.go:255] writing updated cluster config ...
	I0923 10:29:55.003209    2156 ssh_runner.go:195] Run: rm -f paused
	I0923 10:29:55.285012    2156 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 10:29:55.288204    2156 out.go:177] * Done! kubectl is now configured to use "addons-205800" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 23 10:39:26 addons-205800 dockerd[1373]: time="2024-09-23T10:39:26.841618047Z" level=info msg="ignoring event" container=6ca73e5d2c9af5832e385ba850eee5dd58f6140dd0b30812602753845213ab62 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:39:27 addons-205800 dockerd[1373]: time="2024-09-23T10:39:27.134942999Z" level=info msg="ignoring event" container=a1e750ad7fef2148db52edba768181585bf89ff39159167d0913bb731e0d735f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:39:28 addons-205800 dockerd[1373]: time="2024-09-23T10:39:28.877349818Z" level=info msg="ignoring event" container=d9e3504733292931769a3139531d644bf68bd28d657f3904148312caba7aaea1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:39:29 addons-205800 dockerd[1373]: time="2024-09-23T10:39:29.162777968Z" level=info msg="ignoring event" container=e2e7f2447dac63397d58080957c963c5bbf4885fd259e27e356b8246c99f0b49 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:39:35 addons-205800 dockerd[1373]: time="2024-09-23T10:39:35.505867698Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=6261fc5f4f48b354 traceID=10de7e4eb08f7ebbeddb0cc8b3ce5a4e
	Sep 23 10:39:35 addons-205800 dockerd[1373]: time="2024-09-23T10:39:35.519781411Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=6261fc5f4f48b354 traceID=10de7e4eb08f7ebbeddb0cc8b3ce5a4e
	Sep 23 10:39:38 addons-205800 dockerd[1373]: time="2024-09-23T10:39:38.971571380Z" level=info msg="ignoring event" container=536d4540c4cc9f377535ff62896b58ff2f7bc4704ba970775f00c23dc1da87dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:39:39 addons-205800 cri-dockerd[1648]: time="2024-09-23T10:39:39Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"cloud-spanner-emulator-5b584cc74-wfklx_default\": unexpected command output nsenter: cannot open /proc/3626/ns/net: No such file or directory\n with error: exit status 1"
	Sep 23 10:39:39 addons-205800 dockerd[1373]: time="2024-09-23T10:39:39.289749257Z" level=info msg="ignoring event" container=c872bf2b1c44fd5882f774930b6ebc7ac03a4a1a880dd6bdb15247e6c52e46e9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:39:46 addons-205800 dockerd[1373]: time="2024-09-23T10:39:46.188383572Z" level=info msg="ignoring event" container=1c1bf3d6b2d5396675a52735123c5a7e9cd8e7fb5ad664879fceef14f8f4ea22 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:39:48 addons-205800 cri-dockerd[1648]: time="2024-09-23T10:39:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5ba66a57b1712d1386b5e5e46c8e007336acafead32ab2412ab034b1953c4804/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 23 10:39:49 addons-205800 cri-dockerd[1648]: time="2024-09-23T10:39:49Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	Sep 23 10:39:53 addons-205800 cri-dockerd[1648]: time="2024-09-23T10:39:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0bcd33a59e368a27380e711798e0ebe525c17c3b178418bc71b59c97511ff7f5/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 23 10:39:55 addons-205800 dockerd[1373]: time="2024-09-23T10:39:55.285010628Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=d90a4805cf8d2630c7334f8ba7c71015bda84e3643f46aed613d1e05e8d0b5ab spanID=4fa1a37abe6e2a99 traceID=51a9a0dfb978d6c6d98960a7617640fb
	Sep 23 10:39:55 addons-205800 dockerd[1373]: time="2024-09-23T10:39:55.454164081Z" level=info msg="ignoring event" container=d90a4805cf8d2630c7334f8ba7c71015bda84e3643f46aed613d1e05e8d0b5ab module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:39:56 addons-205800 cri-dockerd[1648]: time="2024-09-23T10:39:56Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"local-path-provisioner-86d989889c-8dhcw_local-path-storage\": unexpected command output nsenter: cannot open /proc/4433/ns/net: No such file or directory\n with error: exit status 1"
	Sep 23 10:39:56 addons-205800 dockerd[1373]: time="2024-09-23T10:39:56.935501867Z" level=info msg="ignoring event" container=b54fe25bb9d99ef59752680407609d31278885c8e5f71d92630d0e1b64b0dd1e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:39:57 addons-205800 dockerd[1373]: time="2024-09-23T10:39:57.348700370Z" level=info msg="ignoring event" container=bd5a946404d4b8c595771c6edef8ea0a1021eecf8fe5ff6f5797b61da6551eec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:39:57 addons-205800 dockerd[1373]: time="2024-09-23T10:39:57.926946757Z" level=info msg="ignoring event" container=5ba66a57b1712d1386b5e5e46c8e007336acafead32ab2412ab034b1953c4804 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:40:01 addons-205800 cri-dockerd[1648]: time="2024-09-23T10:40:01Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
	Sep 23 10:40:02 addons-205800 dockerd[1373]: time="2024-09-23T10:40:02.592085318Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=73f43f0468941c9b traceID=8e589644252001731df14b0ea15c5f03
	Sep 23 10:40:02 addons-205800 dockerd[1373]: time="2024-09-23T10:40:02.601786474Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=73f43f0468941c9b traceID=8e589644252001731df14b0ea15c5f03
	Sep 23 10:40:12 addons-205800 dockerd[1373]: time="2024-09-23T10:40:12.270578042Z" level=info msg="ignoring event" container=87736ae0e3dd303447b19592a8f593cf7c3b1b4da223c9950b1115334bb659a6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 10:40:13 addons-205800 cri-dockerd[1648]: time="2024-09-23T10:40:13Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/500783b670eb6bac98a4de9f793b4d62714221109b9104604348caf6fac69227/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 23 10:40:14 addons-205800 cri-dockerd[1648]: time="2024-09-23T10:40:14Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD
	955fff2783ba6       nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3                                                                Less than a second ago   Running             task-pv-container                        0                   500783b670eb6       task-pv-pod-restore
	318af612619d3       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                                13 seconds ago           Running             nginx                                    0                   0bcd33a59e368       nginx
	43c5ce1a8cbe9       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 10 minutes ago           Running             gcp-auth                                 0                   7fc7cb09b035a       gcp-auth-89d5ffd79-vhprd
	015d7b5f8f91d       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce                             11 minutes ago           Running             controller                               0                   eb534d5442501       ingress-nginx-controller-bc57996ff-9j8wt
	7cdeeecfd9fb3       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          13 minutes ago           Running             csi-snapshotter                          0                   98778bc1d175d       csi-hostpathplugin-hjp8t
	db6764eee7c16       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          13 minutes ago           Running             csi-provisioner                          0                   98778bc1d175d       csi-hostpathplugin-hjp8t
	91575a83fa948       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            13 minutes ago           Running             liveness-probe                           0                   98778bc1d175d       csi-hostpathplugin-hjp8t
	e0e66159d8c21       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           13 minutes ago           Running             hostpath                                 0                   98778bc1d175d       csi-hostpathplugin-hjp8t
	10eda3319393c       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                13 minutes ago           Running             node-driver-registrar                    0                   98778bc1d175d       csi-hostpathplugin-hjp8t
	b4214f9892a9d       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              13 minutes ago           Running             csi-resizer                              0                   84c9ec3f632bf       csi-hostpath-resizer-0
	247de6f6a6ddb       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             13 minutes ago           Running             csi-attacher                             0                   6df5dc11e81f7       csi-hostpath-attacher-0
	e65b5f9867cb4       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   13 minutes ago           Running             csi-external-health-monitor-controller   0                   98778bc1d175d       csi-hostpathplugin-hjp8t
	081a12f31bac0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3                   13 minutes ago           Exited              patch                                    0                   555ce7f87d357       ingress-nginx-admission-patch-mm6jz
	94c5fb2327e8c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3                   13 minutes ago           Exited              create                                   0                   c0cbe68afa854       ingress-nginx-admission-create-462zg
	caceccbdc8e1e       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      14 minutes ago           Running             volume-snapshot-controller               0                   e1b30650476f0       snapshot-controller-56fcc65765-74jkh
	01eb5a8860975       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      14 minutes ago           Running             volume-snapshot-controller               0                   0abdc94d7f365       snapshot-controller-56fcc65765-hzxsh
	772dfc324980b       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367                              14 minutes ago           Running             registry-proxy                           0                   3e5bfe698ef80       registry-proxy-lgf7x
	bb4fd7174aa5d       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                                             14 minutes ago           Running             registry                                 0                   2aa627ea8615c       registry-66c9cd494c-974hj
	86a47580ee5be       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c                             15 minutes ago           Running             minikube-ingress-dns                     0                   86bd98592865d       kube-ingress-dns-minikube
	c4575f862712b       6e38f40d628db                                                                                                                                16 minutes ago           Running             storage-provisioner                      0                   630ce07a996a6       storage-provisioner
	bb536376d1f27       c69fa2e9cbf5f                                                                                                                                16 minutes ago           Running             coredns                                  0                   c72837bb27bf0       coredns-7c65d6cfc9-jzr26
	7acd636cb6b5b       60c005f310ff3                                                                                                                                16 minutes ago           Running             kube-proxy                               0                   17e0fb184ab41       kube-proxy-pjbd2
	617903fff65fb       6bab7719df100                                                                                                                                16 minutes ago           Running             kube-apiserver                           0                   ae86237d3423b       kube-apiserver-addons-205800
	ea3a25aedf2cb       2e96e5913fc06                                                                                                                                16 minutes ago           Running             etcd                                     0                   2ac805e19bb68       etcd-addons-205800
	c988c1f4ad2a2       175ffd71cce3d                                                                                                                                16 minutes ago           Running             kube-controller-manager                  0                   9fa950adac82b       kube-controller-manager-addons-205800
	5ac15dc55c867       9aa1fad941575                                                                                                                                16 minutes ago           Running             kube-scheduler                           0                   22773cfadafba       kube-scheduler-addons-205800
	
	
	==> controller_ingress [015d7b5f8f91] <==
	I0923 10:28:38.515933       8 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"cdeeb2bd-6b96-4944-936d-16e0ad1d4916", APIVersion:"v1", ResourceVersion:"678", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0923 10:28:39.699577       8 nginx.go:317] "Starting NGINX process"
	I0923 10:28:39.699990       8 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0923 10:28:39.700865       8 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0923 10:28:39.701382       8 controller.go:193] "Configuration changes detected, backend reload required"
	I0923 10:28:39.717198       8 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0923 10:28:39.717291       8 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-9j8wt"
	I0923 10:28:39.737064       8 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-9j8wt" node="addons-205800"
	I0923 10:28:39.759608       8 controller.go:213] "Backend successfully reloaded"
	I0923 10:28:39.759881       8 controller.go:224] "Initial sync, sleeping for 1 second"
	I0923 10:28:39.760132       8 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-9j8wt", UID:"88ae8ebd-098f-4354-ad0c-1bf6693b5c36", APIVersion:"v1", ResourceVersion:"771", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0923 10:39:52.084119       8 controller.go:1110] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0923 10:39:52.129897       8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.045s renderingIngressLength:1 renderingIngressTime:0.001s admissionTime:0.046s testedConfigurationSize:18.1kB}
	I0923 10:39:52.130009       8 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0923 10:39:52.137883       8 store.go:440] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	I0923 10:39:52.138484       8 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"f66dcdb0-f508-474a-a767-7a14e0e7aa96", APIVersion:"networking.k8s.io/v1", ResourceVersion:"3060", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W0923 10:39:52.138905       8 controller.go:1110] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0923 10:39:52.139059       8 controller.go:193] "Configuration changes detected, backend reload required"
	I0923 10:39:52.251891       8 controller.go:213] "Backend successfully reloaded"
	I0923 10:39:52.252628       8 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-9j8wt", UID:"88ae8ebd-098f-4354-ad0c-1bf6693b5c36", APIVersion:"v1", ResourceVersion:"771", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0923 10:39:55.524736       8 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
	I0923 10:39:55.524897       8 controller.go:193] "Configuration changes detected, backend reload required"
	I0923 10:39:55.678042       8 controller.go:213] "Backend successfully reloaded"
	I0923 10:39:55.683695       8 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-9j8wt", UID:"88ae8ebd-098f-4354-ad0c-1bf6693b5c36", APIVersion:"v1", ResourceVersion:"771", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	10.244.0.1 - - [23/Sep/2024:10:40:08 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.81.0" 81 0.002 [default-nginx-80] [] 10.244.0.36:80 615 0.002 200 bfef09b03c7be81d796a54f4ce22fd94
	
	
	==> coredns [bb536376d1f2] <==
	[INFO] 127.0.0.1:60931 - 10663 "HINFO IN 7372535687978560146.4970145210292455733. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.038457327s
	[INFO] 10.244.0.8:48351 - 33880 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00045526s
	[INFO] 10.244.0.8:48351 - 3164 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000731596s
	[INFO] 10.244.0.8:58038 - 19857 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000344846s
	[INFO] 10.244.0.8:58038 - 2706 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000772402s
	[INFO] 10.244.0.8:49368 - 48048 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000175324s
	[INFO] 10.244.0.8:49368 - 40629 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000373249s
	[INFO] 10.244.0.8:33958 - 28870 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000245532s
	[INFO] 10.244.0.8:33958 - 65477 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000295239s
	[INFO] 10.244.0.8:44128 - 2321 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000196225s
	[INFO] 10.244.0.8:44128 - 59922 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000384151s
	[INFO] 10.244.0.8:41444 - 28263 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000144219s
	[INFO] 10.244.0.8:41444 - 61339 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000328444s
	[INFO] 10.244.0.8:35324 - 17866 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00015362s
	[INFO] 10.244.0.8:35324 - 45001 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000324443s
	[INFO] 10.244.0.8:51453 - 15347 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000139919s
	[INFO] 10.244.0.8:51453 - 62206 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000206627s
	[INFO] 10.244.0.25:56975 - 19777 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000439962s
	[INFO] 10.244.0.25:49132 - 64434 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000305543s
	[INFO] 10.244.0.25:49105 - 63823 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000473167s
	[INFO] 10.244.0.25:42071 - 62022 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000727303s
	[INFO] 10.244.0.25:49338 - 50068 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000101915s
	[INFO] 10.244.0.25:54548 - 29029 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000330747s
	[INFO] 10.244.0.25:46742 - 48417 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 230 0.010384767s
	[INFO] 10.244.0.25:32889 - 26395 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.010456777s
	
	
	==> describe nodes <==
	Name:               addons-205800
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-205800
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=addons-205800
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T10_23_40_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-205800
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-205800"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:23:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-205800
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 10:40:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 10:40:11 +0000   Mon, 23 Sep 2024 10:23:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 10:40:11 +0000   Mon, 23 Sep 2024 10:23:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 10:40:11 +0000   Mon, 23 Sep 2024 10:23:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 10:40:11 +0000   Mon, 23 Sep 2024 10:23:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-205800
	Capacity:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868688Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868688Ki
	  pods:               110
	System Info:
	  Machine ID:                 d9f61fe7700e43d491d8cf378b45f526
	  System UUID:                d9f61fe7700e43d491d8cf378b45f526
	  Boot ID:                    d450b61c-b7f5-4a84-8b7a-3c24688adc16
	  Kernel Version:             5.15.153.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m23s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  default                     task-pv-pod-restore                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  gcp-auth                    gcp-auth-89d5ffd79-vhprd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-9j8wt    100m (0%)     0 (0%)      90Mi (0%)        0 (0%)         16m
	  kube-system                 coredns-7c65d6cfc9-jzr26                    100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     16m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 csi-hostpathplugin-hjp8t                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 etcd-addons-205800                          100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         16m
	  kube-system                 kube-apiserver-addons-205800                250m (1%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-addons-205800       200m (1%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-pjbd2                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-addons-205800                100m (0%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 registry-66c9cd494c-974hj                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 registry-proxy-lgf7x                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 snapshot-controller-56fcc65765-74jkh        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 snapshot-controller-56fcc65765-hzxsh        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (5%)   0 (0%)
	  memory             260Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                             Age                From             Message
	  ----     ------                             ----               ----             -------
	  Normal   Starting                           16m                kube-proxy       
	  Warning  CgroupV1                           16m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced            16m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory            16m (x7 over 16m)  kubelet          Node addons-205800 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure              16m (x7 over 16m)  kubelet          Node addons-205800 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID               16m (x7 over 16m)  kubelet          Node addons-205800 status is now: NodeHasSufficientPID
	  Warning  PossibleMemoryBackedVolumesOnDisk  16m                kubelet          The tmpfs noswap option is not supported. Memory-backed volumes (e.g. secrets, emptyDirs, etc.) might be swapped to disk and should no longer be considered secure.
	  Normal   Starting                           16m                kubelet          Starting kubelet.
	  Warning  CgroupV1                           16m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced            16m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory            16m                kubelet          Node addons-205800 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure              16m                kubelet          Node addons-205800 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID               16m                kubelet          Node addons-205800 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode                     16m                node-controller  Node addons-205800 event: Registered Node addons-205800 in Controller
	
	
	==> dmesg <==
	[  +0.001144] FS-Cache: O-cookie c=0000000d [p=00000002 fl=222 nc=0 na=1]
	[  +0.001516] FS-Cache: O-cookie d=0000000055a2309c{9P.session} n=000000005c3feac2
	[  +0.001548] FS-Cache: O-key=[10] '34323934393337363439'
	[  +0.001032] FS-Cache: N-cookie c=0000000e [p=00000002 fl=2 nc=0 na=1]
	[  +0.001336] FS-Cache: N-cookie d=0000000055a2309c{9P.session} n=00000000cc673d76
	[  +0.001558] FS-Cache: N-key=[10] '34323934393337363439'
	[  +0.014149] WSL (1) WARNING: /usr/share/zoneinfo/Etc/UTC not found. Is the tzdata package installed?
	[  +0.525136] misc dxg: dxgk: dxgglobal_acquire_channel_lock: Failed to acquire global channel lock
	[  +1.748684] WSL (2) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.002212] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.002792] WSL (1) ERROR: ConfigMountFsTab:2589: Processing fstab with mount -a failed.
	[  +0.005280] WSL (1) ERROR: ConfigApplyWindowsLibPath:2537: open /etc/ld.so.conf.d/ld.wsl.conf
	[  +0.000003]  failed 2
	[  +0.006699] WSL (3) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.001956] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.004810] WSL (4) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.002117] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.069743] WSL (1) WARNING: /usr/share/zoneinfo/Etc/UTC not found. Is the tzdata package installed?
	[  +0.111070] misc dxg: dxgk: dxgglobal_acquire_channel_lock: Failed to acquire global channel lock
	[  +0.964638] netlink: 'init': attribute type 4 has an invalid length.
	[Sep23 10:23] tmpfs: Unknown parameter 'noswap'
	[ +10.369228] tmpfs: Unknown parameter 'noswap'
	
	
	==> etcd [ea3a25aedf2c] <==
	{"level":"info","ts":"2024-09-23T10:30:33.157699Z","caller":"traceutil/trace.go:171","msg":"trace[1224592572] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1814; }","duration":"698.286604ms","start":"2024-09-23T10:30:32.459398Z","end":"2024-09-23T10:30:33.157685Z","steps":["trace[1224592572] 'range keys from in-memory index tree'  (duration: 698.11618ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T10:30:33.158093Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"684.254645ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T10:30:33.158203Z","caller":"traceutil/trace.go:171","msg":"trace[321158865] transaction","detail":"{read_only:false; response_revision:1815; number_of_response:1; }","duration":"708.546837ms","start":"2024-09-23T10:30:32.449638Z","end":"2024-09-23T10:30:33.158185Z","steps":["trace[321158865] 'process raft request'  (duration: 657.459304ms)","trace[321158865] 'compare'  (duration: 50.976818ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T10:30:33.158223Z","caller":"traceutil/trace.go:171","msg":"trace[1483044633] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1814; }","duration":"684.381863ms","start":"2024-09-23T10:30:32.473828Z","end":"2024-09-23T10:30:33.158210Z","steps":["trace[1483044633] 'range keys from in-memory index tree'  (duration: 684.246443ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T10:30:33.158312Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T10:30:32.449620Z","time spent":"708.626848ms","remote":"127.0.0.1:54314","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":485,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:1802 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:426 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >"}
	{"level":"warn","ts":"2024-09-23T10:30:33.827999Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"353.54264ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T10:30:33.828184Z","caller":"traceutil/trace.go:171","msg":"trace[1146698502] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1815; }","duration":"353.734666ms","start":"2024-09-23T10:30:33.474428Z","end":"2024-09-23T10:30:33.828163Z","steps":["trace[1146698502] 'range keys from in-memory index tree'  (duration: 353.527639ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T10:30:33.828265Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"528.389138ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-09-23T10:30:33.828299Z","caller":"traceutil/trace.go:171","msg":"trace[2116203604] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1815; }","duration":"528.426143ms","start":"2024-09-23T10:30:33.299864Z","end":"2024-09-23T10:30:33.828290Z","steps":["trace[2116203604] 'range keys from in-memory index tree'  (duration: 528.284623ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T10:30:33.828322Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T10:30:33.299833Z","time spent":"528.48255ms","remote":"127.0.0.1:54314","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":1,"response size":523,"request content":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" "}
	{"level":"info","ts":"2024-09-23T10:30:42.187536Z","caller":"traceutil/trace.go:171","msg":"trace[886491365] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1859; }","duration":"100.717619ms","start":"2024-09-23T10:30:42.086785Z","end":"2024-09-23T10:30:42.187503Z","steps":["trace[886491365] 'compare'  (duration: 92.266827ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:30:43.203922Z","caller":"traceutil/trace.go:171","msg":"trace[1498640239] transaction","detail":"{read_only:false; response_revision:1881; number_of_response:1; }","duration":"111.337019ms","start":"2024-09-23T10:30:43.092568Z","end":"2024-09-23T10:30:43.203905Z","steps":["trace[1498640239] 'process raft request'  (duration: 94.074281ms)","trace[1498640239] 'compare'  (duration: 17.086213ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T10:30:44.487905Z","caller":"traceutil/trace.go:171","msg":"trace[899110063] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/jobflows.flow.volcano.sh; range_end:; response_count:1; response_revision:1906; }","duration":"100.000818ms","start":"2024-09-23T10:30:44.387872Z","end":"2024-09-23T10:30:44.487873Z","steps":["trace[899110063] 'range keys from in-memory index tree'  (duration: 99.93851ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:30:44.491182Z","caller":"traceutil/trace.go:171","msg":"trace[2038366536] transaction","detail":"{read_only:false; response_revision:1907; number_of_response:1; }","duration":"102.723302ms","start":"2024-09-23T10:30:44.388194Z","end":"2024-09-23T10:30:44.490917Z","steps":["trace[2038366536] 'process raft request'  (duration: 22.720007ms)","trace[2038366536] 'compare'  (duration: 76.761137ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T10:30:44.491340Z","caller":"traceutil/trace.go:171","msg":"trace[1285324536] transaction","detail":"{read_only:false; response_revision:1908; number_of_response:1; }","duration":"101.244794ms","start":"2024-09-23T10:30:44.390075Z","end":"2024-09-23T10:30:44.491320Z","steps":["trace[1285324536] 'process raft request'  (duration: 100.756425ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:30:44.491542Z","caller":"traceutil/trace.go:171","msg":"trace[1872971118] linearizableReadLoop","detail":"{readStateIndex:2014; appliedIndex:2012; }","duration":"101.356309ms","start":"2024-09-23T10:30:44.390142Z","end":"2024-09-23T10:30:44.491499Z","steps":["trace[1872971118] 'read index received'  (duration: 20.848643ms)","trace[1872971118] 'applied index is now lower than readState.Index'  (duration: 80.504366ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T10:30:44.492625Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.768526ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1114"}
	{"level":"info","ts":"2024-09-23T10:30:44.492671Z","caller":"traceutil/trace.go:171","msg":"trace[1794947039] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1909; }","duration":"100.823635ms","start":"2024-09-23T10:30:44.391835Z","end":"2024-09-23T10:30:44.492658Z","steps":["trace[1794947039] 'agreement among raft nodes before linearized reading'  (duration: 100.689916ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:33:34.944794Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1525}
	{"level":"info","ts":"2024-09-23T10:33:34.992876Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1525,"took":"47.538558ms","hash":266100928,"current-db-size-bytes":9191424,"current-db-size":"9.2 MB","current-db-size-in-use-bytes":5386240,"current-db-size-in-use":"5.4 MB"}
	{"level":"info","ts":"2024-09-23T10:33:34.993002Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":266100928,"revision":1525,"compact-revision":-1}
	{"level":"info","ts":"2024-09-23T10:38:34.924379Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2240}
	{"level":"info","ts":"2024-09-23T10:38:34.966724Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2240,"took":"41.782149ms","hash":774248184,"current-db-size-bytes":9191424,"current-db-size":"9.2 MB","current-db-size-in-use-bytes":3817472,"current-db-size-in-use":"3.8 MB"}
	{"level":"info","ts":"2024-09-23T10:38:34.966871Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":774248184,"revision":2240,"compact-revision":1525}
	{"level":"info","ts":"2024-09-23T10:39:52.929836Z","caller":"traceutil/trace.go:171","msg":"trace[949988985] transaction","detail":"{read_only:false; response_revision:3065; number_of_response:1; }","duration":"101.732703ms","start":"2024-09-23T10:39:52.828089Z","end":"2024-09-23T10:39:52.929822Z","steps":["trace[949988985] 'process raft request'  (duration: 101.669593ms)"],"step_count":1}
	
	
	==> gcp-auth [43c5ce1a8cbe] <==
	2024/09/23 10:30:51 Ready to write response ...
	2024/09/23 10:30:52 Ready to marshal response ...
	2024/09/23 10:30:52 Ready to write response ...
	2024/09/23 10:30:52 Ready to marshal response ...
	2024/09/23 10:30:52 Ready to write response ...
	2024/09/23 10:39:00 Ready to marshal response ...
	2024/09/23 10:39:00 Ready to write response ...
	2024/09/23 10:39:00 Ready to marshal response ...
	2024/09/23 10:39:00 Ready to write response ...
	2024/09/23 10:39:00 Ready to marshal response ...
	2024/09/23 10:39:00 Ready to write response ...
	2024/09/23 10:39:01 Ready to marshal response ...
	2024/09/23 10:39:01 Ready to write response ...
	2024/09/23 10:39:01 Ready to marshal response ...
	2024/09/23 10:39:01 Ready to write response ...
	2024/09/23 10:39:11 Ready to marshal response ...
	2024/09/23 10:39:11 Ready to write response ...
	2024/09/23 10:39:23 Ready to marshal response ...
	2024/09/23 10:39:23 Ready to write response ...
	2024/09/23 10:39:47 Ready to marshal response ...
	2024/09/23 10:39:47 Ready to write response ...
	2024/09/23 10:39:52 Ready to marshal response ...
	2024/09/23 10:39:52 Ready to write response ...
	2024/09/23 10:40:12 Ready to marshal response ...
	2024/09/23 10:40:12 Ready to write response ...
	
	
	==> kernel <==
	 10:40:15 up 13:28,  0 users,  load average: 1.24, 0.85, 0.82
	Linux addons-205800 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [617903fff65f] <==
	I0923 10:30:13.094566       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0923 10:30:41.699361       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0923 10:30:41.803822       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	W0923 10:30:42.988456       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	I0923 10:30:43.045135       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0923 10:30:43.207592       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0923 10:30:43.289238       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0923 10:30:43.409709       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0923 10:30:44.008009       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0923 10:30:44.106425       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0923 10:30:44.108680       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0923 10:30:44.410215       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	I0923 10:30:44.515139       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0923 10:30:44.601974       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0923 10:30:44.615891       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0923 10:30:45.600910       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0923 10:30:45.792434       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0923 10:39:01.011682       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.103.91.129"}
	I0923 10:39:28.634913       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E0923 10:39:40.881526       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0923 10:39:46.050977       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0923 10:39:47.087182       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0923 10:39:52.131202       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0923 10:39:52.947957       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.242.147"}
	I0923 10:39:55.209226       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [c988c1f4ad2a] <==
	W0923 10:39:41.898675       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:39:41.898783       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0923 10:39:47.089530       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:39:48.481533       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:39:48.481650       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 10:39:50.655665       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-205800"
	W0923 10:39:51.117337       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:39:51.117466       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:39:52.626785       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:39:52.626870       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:39:53.028410       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:39:53.028759       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:39:54.537697       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:39:54.537814       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 10:39:56.333122       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	W0923 10:40:03.649925       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:40:03.650196       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:40:04.531515       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:40:04.531622       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:40:05.556244       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:40:05.556349       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 10:40:11.065615       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-205800"
	I0923 10:40:13.492922       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	W0923 10:40:14.939460       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:40:14.939599       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [7acd636cb6b5] <==
	E0923 10:23:56.124956       1 metrics.go:340] "failed to initialize nfacct client" err="nfacct sub-system not available"
	E0923 10:23:56.224712       1 metrics.go:340] "failed to initialize nfacct client" err="nfacct sub-system not available"
	I0923 10:23:56.247287       1 server_linux.go:66] "Using iptables proxy"
	I0923 10:23:57.130086       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0923 10:23:57.130234       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 10:23:57.826681       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0923 10:23:57.826819       1 server_linux.go:169] "Using iptables Proxier"
	I0923 10:23:57.834480       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	E0923 10:23:57.924585       1 proxier.go:283] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv4"
	E0923 10:23:58.025946       1 proxier.go:283] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv6"
	I0923 10:23:58.026808       1 server.go:483] "Version info" version="v1.31.1"
	I0923 10:23:58.026892       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 10:23:58.028968       1 config.go:199] "Starting service config controller"
	I0923 10:23:58.029016       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 10:23:58.029063       1 config.go:105] "Starting endpoint slice config controller"
	I0923 10:23:58.029103       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 10:23:58.029100       1 config.go:328] "Starting node config controller"
	I0923 10:23:58.029146       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 10:23:58.130208       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 10:23:58.130394       1 shared_informer.go:320] Caches are synced for service config
	I0923 10:23:58.131830       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5ac15dc55c86] <==
	W0923 10:23:37.608445       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 10:23:37.608550       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:23:37.640368       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 10:23:37.640474       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:23:37.712460       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 10:23:37.712571       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:23:37.715441       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 10:23:37.715535       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:23:37.761698       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 10:23:37.761804       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:23:37.798735       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 10:23:37.798903       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:23:37.800280       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 10:23:37.800409       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 10:23:37.940066       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 10:23:37.940171       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:23:37.964290       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 10:23:37.964334       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:23:37.999009       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 10:23:37.999105       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 10:23:38.044870       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 10:23:38.044985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:23:38.209117       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 10:23:38.209211       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0923 10:23:40.043073       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 10:39:58 addons-205800 kubelet[2565]: I0923 10:39:58.643615    2565 reconciler_common.go:288] "Volume detached for volume \"pvc-ada67971-e819-40c6-bf13-33c0eb907af3\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^2721e54a-7998-11ef-93ca-1ac4fe8e9d4f\") on node \"addons-205800\" DevicePath \"\""
	Sep 23 10:40:00 addons-205800 kubelet[2565]: I0923 10:40:00.366417    2565 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a1a4b46-014f-44f2-ae78-7087c32e2f8f" path="/var/lib/kubelet/pods/7a1a4b46-014f-44f2-ae78-7087c32e2f8f/volumes"
	Sep 23 10:40:02 addons-205800 kubelet[2565]: I0923 10:40:02.463100    2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=2.868001789 podStartE2EDuration="10.46282324s" podCreationTimestamp="2024-09-23 10:39:52 +0000 UTC" firstStartedPulling="2024-09-23 10:39:53.718772436 +0000 UTC m=+973.764892827" lastFinishedPulling="2024-09-23 10:40:01.313593887 +0000 UTC m=+981.359714278" observedRunningTime="2024-09-23 10:40:02.461006586 +0000 UTC m=+982.507126977" watchObservedRunningTime="2024-09-23 10:40:02.46282324 +0000 UTC m=+982.508943531"
	Sep 23 10:40:02 addons-205800 kubelet[2565]: E0923 10:40:02.603404    2565 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" image="gcr.io/k8s-minikube/busybox:latest"
	Sep 23 10:40:02 addons-205800 kubelet[2565]: E0923 10:40:02.603676    2565 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-test,Image:gcr.io/k8s-minikube/busybox,Command:[],Args:[sh -c wget --spider -S http://registry.kube-system.svc.cluster.local],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:GOOGLE_APPLICATION_CREDENTIALS,Value:/google-app-creds.json,ValueFrom:nil,},EnvVar{Name:PROJECT_ID,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCP_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GOOGLE_CLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:CLOUDSDK_CORE_PROJECT,Value:this_is_fake,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-szp8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,Su
bPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:gcp-creds,ReadOnly:true,MountPath:/google-app-creds.json,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:true,StdinOnce:true,TTY:true,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod registry-test_default(15970a74-afc7-47e2-ba1c-e765fae1d2e9): ErrImagePull: Error response from daemon: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" logger="UnhandledError"
	Sep 23 10:40:02 addons-205800 kubelet[2565]: E0923 10:40:02.605033    2565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ErrImagePull: \"Error response from daemon: Head \\\"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\\\": unauthorized: authentication failed\"" pod="default/registry-test" podUID="15970a74-afc7-47e2-ba1c-e765fae1d2e9"
	Sep 23 10:40:03 addons-205800 kubelet[2565]: E0923 10:40:03.353610    2565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="9d99fadc-8193-48bc-8ebf-321a4e06b83b"
	Sep 23 10:40:10 addons-205800 kubelet[2565]: I0923 10:40:10.349222    2565 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-974hj" secret="" err="secret \"gcp-auth\" not found"
	Sep 23 10:40:12 addons-205800 kubelet[2565]: I0923 10:40:12.513046    2565 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/15970a74-afc7-47e2-ba1c-e765fae1d2e9-gcp-creds\") pod \"15970a74-afc7-47e2-ba1c-e765fae1d2e9\" (UID: \"15970a74-afc7-47e2-ba1c-e765fae1d2e9\") "
	Sep 23 10:40:12 addons-205800 kubelet[2565]: I0923 10:40:12.513176    2565 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szp8j\" (UniqueName: \"kubernetes.io/projected/15970a74-afc7-47e2-ba1c-e765fae1d2e9-kube-api-access-szp8j\") pod \"15970a74-afc7-47e2-ba1c-e765fae1d2e9\" (UID: \"15970a74-afc7-47e2-ba1c-e765fae1d2e9\") "
	Sep 23 10:40:12 addons-205800 kubelet[2565]: I0923 10:40:12.513318    2565 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15970a74-afc7-47e2-ba1c-e765fae1d2e9-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "15970a74-afc7-47e2-ba1c-e765fae1d2e9" (UID: "15970a74-afc7-47e2-ba1c-e765fae1d2e9"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 23 10:40:12 addons-205800 kubelet[2565]: I0923 10:40:12.516399    2565 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15970a74-afc7-47e2-ba1c-e765fae1d2e9-kube-api-access-szp8j" (OuterVolumeSpecName: "kube-api-access-szp8j") pod "15970a74-afc7-47e2-ba1c-e765fae1d2e9" (UID: "15970a74-afc7-47e2-ba1c-e765fae1d2e9"). InnerVolumeSpecName "kube-api-access-szp8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 10:40:12 addons-205800 kubelet[2565]: E0923 10:40:12.529012    2565 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7a1a4b46-014f-44f2-ae78-7087c32e2f8f" containerName="task-pv-container"
	Sep 23 10:40:12 addons-205800 kubelet[2565]: E0923 10:40:12.529118    2565 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ff745530-4c88-4c71-b3b1-152204728cef" containerName="local-path-provisioner"
	Sep 23 10:40:12 addons-205800 kubelet[2565]: E0923 10:40:12.529130    2565 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="274df95a-cd8a-4e15-8fcf-f56c51ccffcf" containerName="gadget"
	Sep 23 10:40:12 addons-205800 kubelet[2565]: I0923 10:40:12.529181    2565 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a1a4b46-014f-44f2-ae78-7087c32e2f8f" containerName="task-pv-container"
	Sep 23 10:40:12 addons-205800 kubelet[2565]: I0923 10:40:12.529189    2565 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff745530-4c88-4c71-b3b1-152204728cef" containerName="local-path-provisioner"
	Sep 23 10:40:12 addons-205800 kubelet[2565]: I0923 10:40:12.613781    2565 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgwcp\" (UniqueName: \"kubernetes.io/projected/58bfe176-21a6-4f5f-91ff-9cbf53bf9df5-kube-api-access-mgwcp\") pod \"task-pv-pod-restore\" (UID: \"58bfe176-21a6-4f5f-91ff-9cbf53bf9df5\") " pod="default/task-pv-pod-restore"
	Sep 23 10:40:12 addons-205800 kubelet[2565]: I0923 10:40:12.613949    2565 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/58bfe176-21a6-4f5f-91ff-9cbf53bf9df5-gcp-creds\") pod \"task-pv-pod-restore\" (UID: \"58bfe176-21a6-4f5f-91ff-9cbf53bf9df5\") " pod="default/task-pv-pod-restore"
	Sep 23 10:40:12 addons-205800 kubelet[2565]: I0923 10:40:12.613995    2565 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a9c20dad-bf97-4118-b696-eb70544c1822\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^3593a363-7998-11ef-93ca-1ac4fe8e9d4f\") pod \"task-pv-pod-restore\" (UID: \"58bfe176-21a6-4f5f-91ff-9cbf53bf9df5\") " pod="default/task-pv-pod-restore"
	Sep 23 10:40:12 addons-205800 kubelet[2565]: I0923 10:40:12.614022    2565 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/15970a74-afc7-47e2-ba1c-e765fae1d2e9-gcp-creds\") on node \"addons-205800\" DevicePath \"\""
	Sep 23 10:40:12 addons-205800 kubelet[2565]: I0923 10:40:12.614035    2565 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-szp8j\" (UniqueName: \"kubernetes.io/projected/15970a74-afc7-47e2-ba1c-e765fae1d2e9-kube-api-access-szp8j\") on node \"addons-205800\" DevicePath \"\""
	Sep 23 10:40:12 addons-205800 kubelet[2565]: I0923 10:40:12.730079    2565 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a9c20dad-bf97-4118-b696-eb70544c1822\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^3593a363-7998-11ef-93ca-1ac4fe8e9d4f\") pod \"task-pv-pod-restore\" (UID: \"58bfe176-21a6-4f5f-91ff-9cbf53bf9df5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/ba06be038b9fded0409d2fee06847b2425d475bc4e52c739681418e122ae1e28/globalmount\"" pod="default/task-pv-pod-restore"
	Sep 23 10:40:14 addons-205800 kubelet[2565]: I0923 10:40:14.367790    2565 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15970a74-afc7-47e2-ba1c-e765fae1d2e9" path="/var/lib/kubelet/pods/15970a74-afc7-47e2-ba1c-e765fae1d2e9/volumes"
	Sep 23 10:40:14 addons-205800 kubelet[2565]: I0923 10:40:14.760912    2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod-restore" podStartSLOduration=2.139703356 podStartE2EDuration="2.760883928s" podCreationTimestamp="2024-09-23 10:40:12 +0000 UTC" firstStartedPulling="2024-09-23 10:40:13.414632546 +0000 UTC m=+993.463908036" lastFinishedPulling="2024-09-23 10:40:14.035813118 +0000 UTC m=+994.085088608" observedRunningTime="2024-09-23 10:40:14.757737124 +0000 UTC m=+994.807012714" watchObservedRunningTime="2024-09-23 10:40:14.760883928 +0000 UTC m=+994.810159518"
	
	
	==> storage-provisioner [c4575f862712] <==
	I0923 10:24:08.425810       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 10:24:08.536435       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 10:24:08.536497       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 10:24:08.743994       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 10:24:08.744232       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-205800_205a690a-a47b-4454-9d08-497941a48c54!
	I0923 10:24:08.823388       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"23ed8e17-c8f4-47ad-92ff-bb54bfd65801", APIVersion:"v1", ResourceVersion:"804", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-205800_205a690a-a47b-4454-9d08-497941a48c54 became leader
	I0923 10:24:08.924775       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-205800_205a690a-a47b-4454-9d08-497941a48c54!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-205800 -n addons-205800
helpers_test.go:261: (dbg) Run:  kubectl --context addons-205800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-462zg ingress-nginx-admission-patch-mm6jz
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-205800 describe pod busybox ingress-nginx-admission-create-462zg ingress-nginx-admission-patch-mm6jz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-205800 describe pod busybox ingress-nginx-admission-create-462zg ingress-nginx-admission-patch-mm6jz: exit status 1 (266.9229ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-205800/192.168.49.2
	Start Time:       Mon, 23 Sep 2024 10:30:52 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vhzfm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vhzfm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m26s                   default-scheduler  Successfully assigned default/busybox to addons-205800
	  Normal   Pulling    8m2s (x4 over 9m26s)    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     8m2s (x4 over 9m25s)    kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     8m2s (x4 over 9m25s)    kubelet            Error: ErrImagePull
	  Warning  Failed     7m36s (x6 over 9m25s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m22s (x20 over 9m25s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-462zg" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-mm6jz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-205800 describe pod busybox ingress-nginx-admission-create-462zg ingress-nginx-admission-patch-mm6jz: exit status 1
--- FAIL: TestAddons/parallel/Registry (78.67s)

                                                
                                    
TestErrorSpam/setup (61.99s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-232800 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-232800 --driver=docker
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-232800 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-232800 --driver=docker: (1m1.9853283s)
error_spam_test.go:96: unexpected stderr: "! Failing to connect to https://registry.k8s.io/ from inside the minikube container"
error_spam_test.go:96: unexpected stderr: "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
error_spam_test.go:110: minikube stdout:
* [nospam-232800] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
- KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
- MINIKUBE_LOCATION=19689
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting "nospam-232800" primary control-plane node in "nospam-232800" cluster
* Pulling base image v0.0.45-1726784731-19672 ...
* Creating docker container (CPUs=2, Memory=2250MB) ...
* Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-232800" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Failing to connect to https://registry.k8s.io/ from inside the minikube container
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
--- FAIL: TestErrorSpam/setup (61.99s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (5.32s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:735: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-734700
helpers_test.go:235: (dbg) docker inspect functional-734700:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bbb3a01aac01bac15657ba47a4c87341802b860e97da94ea4329dc9e51353ce2",
	        "Created": "2024-09-23T10:42:51.753719185Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 28049,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-23T10:42:52.07112247Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:d94335c0cd164ddebb3c5158e317bcf6d2e08dc08f448d25251f425acb842829",
	        "ResolvConfPath": "/var/lib/docker/containers/bbb3a01aac01bac15657ba47a4c87341802b860e97da94ea4329dc9e51353ce2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bbb3a01aac01bac15657ba47a4c87341802b860e97da94ea4329dc9e51353ce2/hostname",
	        "HostsPath": "/var/lib/docker/containers/bbb3a01aac01bac15657ba47a4c87341802b860e97da94ea4329dc9e51353ce2/hosts",
	        "LogPath": "/var/lib/docker/containers/bbb3a01aac01bac15657ba47a4c87341802b860e97da94ea4329dc9e51353ce2/bbb3a01aac01bac15657ba47a4c87341802b860e97da94ea4329dc9e51353ce2-json.log",
	        "Name": "/functional-734700",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-734700:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-734700",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/08316ddfe288a7ba0a8e2f1e27bad913968bcf83f059a9dce56a7bc5d176964a-init/diff:/var/lib/docker/overlay2/45a1d176e43ae6a4b4b413b83d6ac02867e558bd9182f31de6a362b3112ed40d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/08316ddfe288a7ba0a8e2f1e27bad913968bcf83f059a9dce56a7bc5d176964a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/08316ddfe288a7ba0a8e2f1e27bad913968bcf83f059a9dce56a7bc5d176964a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/08316ddfe288a7ba0a8e2f1e27bad913968bcf83f059a9dce56a7bc5d176964a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-734700",
	                "Source": "/var/lib/docker/volumes/functional-734700/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-734700",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-734700",
	                "name.minikube.sigs.k8s.io": "functional-734700",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e5399fcd34d54faf5354561a47e8271a5483160f1e5bb0633a803a18b4585cf1",
	            "SandboxKey": "/var/run/docker/netns/e5399fcd34d5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57731"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57732"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57733"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57729"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57730"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-734700": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "89f4dc667878d0759e09e5ffbf3b8f66a31c9795ea77c5b2bf75f28b264491ce",
	                    "EndpointID": "1688cd539352d0be24bd909008d3df1d53826ab276d03fcb5cb3a9dee577bb0b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-734700",
	                        "bbb3a01aac01"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-734700 -n functional-734700
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 logs -n 25
E0923 10:45:00.536497    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-734700 logs -n 25: (2.3896036s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-232800 --log_dir                                     | nospam-232800     | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:42 UTC | 23 Sep 24 10:42 UTC |
	|         | C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-232800 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-232800 --log_dir                                     | nospam-232800     | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:42 UTC | 23 Sep 24 10:42 UTC |
	|         | C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-232800 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-232800 --log_dir                                     | nospam-232800     | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:42 UTC | 23 Sep 24 10:42 UTC |
	|         | C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-232800 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-232800 --log_dir                                     | nospam-232800     | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:42 UTC | 23 Sep 24 10:42 UTC |
	|         | C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-232800 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-232800 --log_dir                                     | nospam-232800     | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:42 UTC | 23 Sep 24 10:42 UTC |
	|         | C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-232800 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-232800 --log_dir                                     | nospam-232800     | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:42 UTC | 23 Sep 24 10:42 UTC |
	|         | C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-232800 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-232800 --log_dir                                     | nospam-232800     | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:42 UTC | 23 Sep 24 10:42 UTC |
	|         | C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-232800 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-232800                                            | nospam-232800     | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:42 UTC | 23 Sep 24 10:42 UTC |
	| start   | -p functional-734700                                        | functional-734700 | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:42 UTC | 23 Sep 24 10:43 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=docker                                  |                   |                   |         |                     |                     |
	| start   | -p functional-734700                                        | functional-734700 | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:43 UTC | 23 Sep 24 10:44 UTC |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-734700 cache add                                 | functional-734700 | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:44 UTC | 23 Sep 24 10:44 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-734700 cache add                                 | functional-734700 | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:44 UTC | 23 Sep 24 10:44 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-734700 cache add                                 | functional-734700 | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:44 UTC | 23 Sep 24 10:44 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-734700 cache add                                 | functional-734700 | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:44 UTC | 23 Sep 24 10:44 UTC |
	|         | minikube-local-cache-test:functional-734700                 |                   |                   |         |                     |                     |
	| cache   | functional-734700 cache delete                              | functional-734700 | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:44 UTC | 23 Sep 24 10:44 UTC |
	|         | minikube-local-cache-test:functional-734700                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:44 UTC | 23 Sep 24 10:44 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:44 UTC | 23 Sep 24 10:44 UTC |
	| ssh     | functional-734700 ssh sudo                                  | functional-734700 | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:44 UTC | 23 Sep 24 10:44 UTC |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-734700                                           | functional-734700 | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:44 UTC | 23 Sep 24 10:44 UTC |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-734700 ssh                                       | functional-734700 | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:44 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-734700 cache reload                              | functional-734700 | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:44 UTC | 23 Sep 24 10:44 UTC |
	| ssh     | functional-734700 ssh                                       | functional-734700 | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:44 UTC | 23 Sep 24 10:44 UTC |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:44 UTC | 23 Sep 24 10:44 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:44 UTC | 23 Sep 24 10:44 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-734700 kubectl --                                | functional-734700 | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:44 UTC | 23 Sep 24 10:44 UTC |
	|         | --context functional-734700                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:43:58
	Running on machine: minikube4
	Binary: Built with gc go1.23.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 10:43:58.447063    6872 out.go:345] Setting OutFile to fd 920 ...
	I0923 10:43:58.525796    6872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:43:58.525855    6872 out.go:358] Setting ErrFile to fd 900...
	I0923 10:43:58.525855    6872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:43:58.544110    6872 out.go:352] Setting JSON to false
	I0923 10:43:58.547201    6872 start.go:129] hostinfo: {"hostname":"minikube4","uptime":48801,"bootTime":1727039436,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0923 10:43:58.547393    6872 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 10:43:58.551197    6872 out.go:177] * [functional-734700] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0923 10:43:58.554487    6872 notify.go:220] Checking for updates...
	I0923 10:43:58.556124    6872 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0923 10:43:58.559966    6872 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 10:43:58.562255    6872 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0923 10:43:58.563840    6872 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 10:43:58.567935    6872 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:43:58.570546    6872 config.go:182] Loaded profile config "functional-734700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:43:58.570546    6872 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:43:58.749074    6872 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.1 (166053)
	I0923 10:43:58.758563    6872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:43:59.077956    6872 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:true NGoroutines:81 SystemTime:2024-09-23 10:43:59.049845787 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0923 10:43:59.081337    6872 out.go:177] * Using the docker driver based on existing profile
	I0923 10:43:59.083196    6872 start.go:297] selected driver: docker
	I0923 10:43:59.083297    6872 start.go:901] validating driver "docker" against &{Name:functional-734700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-734700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:43:59.083626    6872 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 10:43:59.101082    6872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:43:59.418839    6872 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:true NGoroutines:81 SystemTime:2024-09-23 10:43:59.394572363 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0923 10:43:59.523162    6872 cni.go:84] Creating CNI manager for ""
	I0923 10:43:59.523234    6872 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 10:43:59.523464    6872 start.go:340] cluster config:
	{Name:functional-734700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-734700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:43:59.526904    6872 out.go:177] * Starting "functional-734700" primary control-plane node in "functional-734700" cluster
	I0923 10:43:59.529567    6872 cache.go:121] Beginning downloading kic base image for docker with docker
	I0923 10:43:59.532906    6872 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0923 10:43:59.534066    6872 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 10:43:59.534066    6872 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 10:43:59.535164    6872 preload.go:146] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0923 10:43:59.535223    6872 cache.go:56] Caching tarball of preloaded images
	I0923 10:43:59.535424    6872 preload.go:172] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 10:43:59.535424    6872 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 10:43:59.536035    6872 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-734700\config.json ...
	I0923 10:43:59.670681    6872 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon, skipping pull
	I0923 10:43:59.670681    6872 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in daemon, skipping load
	I0923 10:43:59.670681    6872 cache.go:194] Successfully downloaded all kic artifacts
	I0923 10:43:59.670681    6872 start.go:360] acquireMachinesLock for functional-734700: {Name:mkfc2a526621093e8c8bb6ee6969e9f0a53439f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 10:43:59.671321    6872 start.go:364] duration metric: took 107.8µs to acquireMachinesLock for "functional-734700"
	I0923 10:43:59.671597    6872 start.go:96] Skipping create...Using existing machine configuration
	I0923 10:43:59.671643    6872 fix.go:54] fixHost starting: 
	I0923 10:43:59.691984    6872 cli_runner.go:164] Run: docker container inspect functional-734700 --format={{.State.Status}}
	I0923 10:43:59.775393    6872 fix.go:112] recreateIfNeeded on functional-734700: state=Running err=<nil>
	W0923 10:43:59.775452    6872 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 10:43:59.779224    6872 out.go:177] * Updating the running docker "functional-734700" container ...
	I0923 10:43:59.782034    6872 machine.go:93] provisionDockerMachine start ...
	I0923 10:43:59.791332    6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-734700
	I0923 10:43:59.875378    6872 main.go:141] libmachine: Using SSH client type: native
	I0923 10:43:59.876498    6872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x761bc0] 0x764700 <nil>  [] 0s} 127.0.0.1 57731 <nil> <nil>}
	I0923 10:43:59.876549    6872 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 10:44:00.054213    6872 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-734700
	
	I0923 10:44:00.054213    6872 ubuntu.go:169] provisioning hostname "functional-734700"
	I0923 10:44:00.065035    6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-734700
	I0923 10:44:00.140176    6872 main.go:141] libmachine: Using SSH client type: native
	I0923 10:44:00.141172    6872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x761bc0] 0x764700 <nil>  [] 0s} 127.0.0.1 57731 <nil> <nil>}
	I0923 10:44:00.141172    6872 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-734700 && echo "functional-734700" | sudo tee /etc/hostname
	I0923 10:44:00.346099    6872 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-734700
	
	I0923 10:44:00.354467    6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-734700
	I0923 10:44:00.433084    6872 main.go:141] libmachine: Using SSH client type: native
	I0923 10:44:00.434005    6872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x761bc0] 0x764700 <nil>  [] 0s} 127.0.0.1 57731 <nil> <nil>}
	I0923 10:44:00.434005    6872 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-734700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-734700/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-734700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 10:44:00.616830    6872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
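The hostname step above patches /etc/hosts so the machine name resolves locally via 127.0.1.1. The same grep/sed logic from the SSH command, restated as a standalone sketch against a scratch file (NAME and the scratch path are stand-ins, so no sudo is needed):

```shell
# Patch a hosts file so NAME resolves via 127.0.1.1, mirroring the
# provisioning command in the log. Operates on a scratch copy, not /etc/hosts.
NAME=functional-734700          # stand-in for the machine hostname
HOSTS=$(mktemp)                 # scratch file instead of /etc/hosts
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
    if grep -q '^127.0.1.1[[:space:]]' "$HOSTS"; then
        # An entry already exists: rewrite it in place.
        sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
    else
        # No 127.0.1.1 entry yet: append one.
        echo "127.0.1.1 $NAME" >> "$HOSTS"
    fi
fi
cat "$HOSTS"
```

Because the outer grep guards the whole block, rerunning the script is a no-op once the name is present — the same idempotence the provisioner relies on.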
	I0923 10:44:00.616830    6872 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0923 10:44:00.616830    6872 ubuntu.go:177] setting up certificates
	I0923 10:44:00.616830    6872 provision.go:84] configureAuth start
	I0923 10:44:00.628229    6872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-734700
	I0923 10:44:00.703173    6872 provision.go:143] copyHostCerts
	I0923 10:44:00.703173    6872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I0923 10:44:00.703173    6872 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I0923 10:44:00.703742    6872 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I0923 10:44:00.704276    6872 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0923 10:44:00.705435    6872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I0923 10:44:00.705492    6872 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I0923 10:44:00.705492    6872 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I0923 10:44:00.705492    6872 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I0923 10:44:00.706552    6872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I0923 10:44:00.707124    6872 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I0923 10:44:00.707124    6872 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I0923 10:44:00.707124    6872 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0923 10:44:00.707969    6872 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-734700 san=[127.0.0.1 192.168.49.2 functional-734700 localhost minikube]
	I0923 10:44:00.868480    6872 provision.go:177] copyRemoteCerts
	I0923 10:44:00.880090    6872 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 10:44:00.890136    6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-734700
	I0923 10:44:00.961300    6872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57731 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-734700\id_rsa Username:docker}
	I0923 10:44:01.097217    6872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0923 10:44:01.097606    6872 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 10:44:01.144810    6872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0923 10:44:01.145573    6872 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0923 10:44:01.188581    6872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0923 10:44:01.188581    6872 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 10:44:01.231408    6872 provision.go:87] duration metric: took 614.549ms to configureAuth
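configureAuth (above) generates a server certificate with the SANs printed at provision.go:117 (127.0.0.1, 192.168.49.2, functional-734700, localhost, minikube) and scps it to /etc/docker. As a shape-only sketch, a self-signed certificate with the same SAN list can be produced with openssl — minikube actually signs against its own ca.pem/ca-key.pem, so this is not the real code path:

```shell
# Self-signed stand-in for the server.pem/server-key.pem pair the log copies
# to /etc/docker; the SAN list mirrors the one printed by provision.go:117.
# Requires OpenSSL 1.1.1+ for -addext.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout "$dir/server-key.pem" -out "$dir/server.pem" \
    -subj "/O=jenkins.functional-734700" \
    -addext "subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:functional-734700,DNS:localhost,DNS:minikube"
# Confirm the SANs made it into the certificate:
openssl x509 -in "$dir/server.pem" -noout -ext subjectAltName
```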
	I0923 10:44:01.231408    6872 ubuntu.go:193] setting minikube options for container-runtime
	I0923 10:44:01.232193    6872 config.go:182] Loaded profile config "functional-734700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:44:01.244086    6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-734700
	I0923 10:44:01.320835    6872 main.go:141] libmachine: Using SSH client type: native
	I0923 10:44:01.321197    6872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x761bc0] 0x764700 <nil>  [] 0s} 127.0.0.1 57731 <nil> <nil>}
	I0923 10:44:01.321197    6872 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 10:44:01.508421    6872 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0923 10:44:01.508421    6872 ubuntu.go:71] root file system type: overlay
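The fstype probe above is how the provisioner decides it is running on an overlay root, which is expected inside the kicbase container. The command itself is plain df:

```shell
# Print the filesystem type backing /. Inside the kicbase container this is
# "overlay", as the log shows; on a bare host expect ext4, xfs, btrfs, etc.
# --output is a GNU coreutils extension.
fstype=$(df --output=fstype / | tail -n 1)
echo "root fs: $fstype"
```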
	I0923 10:44:01.508421    6872 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 10:44:01.518229    6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-734700
	I0923 10:44:01.599259    6872 main.go:141] libmachine: Using SSH client type: native
	I0923 10:44:01.599799    6872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x761bc0] 0x764700 <nil>  [] 0s} 127.0.0.1 57731 <nil> <nil>}
	I0923 10:44:01.599923    6872 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 10:44:01.807070    6872 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 10:44:01.819226    6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-734700
	I0923 10:44:01.895305    6872 main.go:141] libmachine: Using SSH client type: native
	I0923 10:44:01.896326    6872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x761bc0] 0x764700 <nil>  [] 0s} 127.0.0.1 57731 <nil> <nil>}
	I0923 10:44:01.896371    6872 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 10:44:02.094063    6872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
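The diff-or-replace one-liner above is an idempotent update pattern: because diff exits non-zero only when the files differ, the new unit replaces the old one (and triggers daemon-reload/enable/restart) only when something actually changed. The same pattern on scratch files, without sudo or systemctl (apply_unit is a hypothetical helper, and the echo stands in for the restart side effects):

```shell
# Replace "current" with "candidate" only when they differ; the block after
# || stands in for the daemon-reload/restart sequence in the log.
apply_unit() {
    current=$1
    candidate=$2
    diff -u "$current" "$candidate" || {
        mv "$candidate" "$current"
        echo "unit updated"    # stand-in for systemctl daemon-reload/restart
    }
}

cur=$(mktemp)
new=$(mktemp)
echo "ExecStart=/usr/bin/dockerd" > "$cur"
echo "ExecStart=/usr/bin/dockerd --debug" > "$new"
apply_unit "$cur" "$new"   # files differ: prints the diff, then "unit updated"
```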
	I0923 10:44:02.094063    6872 machine.go:96] duration metric: took 2.3119202s to provisionDockerMachine
	I0923 10:44:02.094063    6872 start.go:293] postStartSetup for "functional-734700" (driver="docker")
	I0923 10:44:02.094063    6872 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 10:44:02.108670    6872 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 10:44:02.116120    6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-734700
	I0923 10:44:02.186673    6872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57731 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-734700\id_rsa Username:docker}
	I0923 10:44:02.324462    6872 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 10:44:02.335212    6872 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.5 LTS"
	I0923 10:44:02.335212    6872 command_runner.go:130] > NAME="Ubuntu"
	I0923 10:44:02.335212    6872 command_runner.go:130] > VERSION_ID="22.04"
	I0923 10:44:02.335212    6872 command_runner.go:130] > VERSION="22.04.5 LTS (Jammy Jellyfish)"
	I0923 10:44:02.335212    6872 command_runner.go:130] > VERSION_CODENAME=jammy
	I0923 10:44:02.335212    6872 command_runner.go:130] > ID=ubuntu
	I0923 10:44:02.335212    6872 command_runner.go:130] > ID_LIKE=debian
	I0923 10:44:02.335212    6872 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0923 10:44:02.335212    6872 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0923 10:44:02.335212    6872 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0923 10:44:02.335212    6872 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0923 10:44:02.335212    6872 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0923 10:44:02.335212    6872 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 10:44:02.335212    6872 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 10:44:02.336198    6872 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 10:44:02.336198    6872 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0923 10:44:02.336198    6872 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I0923 10:44:02.336198    6872 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I0923 10:44:02.337301    6872 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\43162.pem -> 43162.pem in /etc/ssl/certs
	I0923 10:44:02.337301    6872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\43162.pem -> /etc/ssl/certs/43162.pem
	I0923 10:44:02.338414    6872 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\4316\hosts -> hosts in /etc/test/nested/copy/4316
	I0923 10:44:02.338414    6872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\4316\hosts -> /etc/test/nested/copy/4316/hosts
	I0923 10:44:02.352892    6872 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4316
	I0923 10:44:02.376864    6872 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\43162.pem --> /etc/ssl/certs/43162.pem (1708 bytes)
	I0923 10:44:02.422202    6872 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\4316\hosts --> /etc/test/nested/copy/4316/hosts (40 bytes)
	I0923 10:44:02.471047    6872 start.go:296] duration metric: took 376.9658ms for postStartSetup
	I0923 10:44:02.482828    6872 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 10:44:02.490527    6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-734700
	I0923 10:44:02.566520    6872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57731 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-734700\id_rsa Username:docker}
	I0923 10:44:02.687844    6872 command_runner.go:130] > 1%
	I0923 10:44:02.699874    6872 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0923 10:44:02.714715    6872 command_runner.go:130] > 951G
	I0923 10:44:02.714715    6872 fix.go:56] duration metric: took 3.0429741s for fixHost
	I0923 10:44:02.714715    6872 start.go:83] releasing machines lock for "functional-734700", held for 3.04318s
	I0923 10:44:02.726512    6872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-734700
	I0923 10:44:02.801469    6872 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0923 10:44:02.812127    6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-734700
	I0923 10:44:02.812746    6872 ssh_runner.go:195] Run: cat /version.json
	I0923 10:44:02.821513    6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-734700
	I0923 10:44:02.892716    6872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57731 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-734700\id_rsa Username:docker}
	I0923 10:44:02.900979    6872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57731 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-734700\id_rsa Username:docker}
	I0923 10:44:03.014013    6872 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W0923 10:44:03.014013    6872 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
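The probe above fails because the Windows host's binary name, curl.exe, is forwarded verbatim into the Linux guest, where only curl exists — hence exit status 127 and the proxy warning that follows. A probe that tolerates either name could look like this (probe is a hypothetical helper, not a minikube function):

```shell
# Run a short connectivity probe with whichever curl binary is on PATH.
probe() {
    url=$1
    if command -v curl >/dev/null 2>&1; then
        bin=curl
    elif command -v curl.exe >/dev/null 2>&1; then
        bin=curl.exe
    else
        echo "no curl available" >&2
        return 127
    fi
    "$bin" -sS -m 2 "$url" >/dev/null
}

probe https://registry.k8s.io/ || echo "registry unreachable"
```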
	I0923 10:44:03.024792    6872 command_runner.go:130] > {"iso_version": "v1.34.0-1726481713-19649", "kicbase_version": "v0.0.45-1726784731-19672", "minikube_version": "v1.34.0", "commit": "342ed9b49b7fd0c6b2cb4410be5c5d5251f51ed8"}
	I0923 10:44:03.037011    6872 ssh_runner.go:195] Run: systemctl --version
	I0923 10:44:03.049527    6872 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I0923 10:44:03.049573    6872 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0923 10:44:03.062013    6872 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 10:44:03.076439    6872 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0923 10:44:03.076439    6872 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0923 10:44:03.076439    6872 command_runner.go:130] > Device: 8ah/138d	Inode: 227         Links: 1
	I0923 10:44:03.076439    6872 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0923 10:44:03.076439    6872 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0923 10:44:03.076439    6872 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0923 10:44:03.076439    6872 command_runner.go:130] > Change: 2024-09-23 10:21:56.674660336 +0000
	I0923 10:44:03.076439    6872 command_runner.go:130] >  Birth: 2024-09-23 10:21:56.674660336 +0000
	I0923 10:44:03.090539    6872 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0923 10:44:03.109113    6872 command_runner.go:130] ! find: '\\etc\\cni\\net.d': No such file or directory
	W0923 10:44:03.110662    6872 start.go:439] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
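The find failure above is a path-separator bug: the CNI directory is passed with Windows backslashes (`\etc\cni\net.d`), which the remote shell reads as a literal relative path — so the earlier stat succeeds on /etc/cni/net.d/200-loopback.conf while the patch step cannot find it. With a POSIX path the same patch works; below it is restated (slightly restructured as a loop, and without sudo) against a scratch directory:

```shell
# The loopback-CNI patch from the log, with a POSIX path: name the loopback
# interface and bump cniVersion to 1.0.0 in any *loopback.conf* file.
netd=$(mktemp -d)   # scratch stand-in for /etc/cni/net.d
cat > "$netd/200-loopback.conf" <<'EOF'
{
    "cniVersion": "0.3.1",
    "type": "loopback"
}
EOF

for f in "$netd"/*loopback.conf*; do
    grep -q loopback "$f" || continue
    # Insert a "name" key only if one is missing (GNU sed \n in replacement):
    grep -q '"name"' "$f" || \
        sed -i 's|"type": "loopback"|"name": "loopback",\n    &|' "$f"
    sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' "$f"
done
cat "$netd/200-loopback.conf"
```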
	I0923 10:44:03.122928    6872 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W0923 10:44:03.128286    6872 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W0923 10:44:03.128286    6872 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0923 10:44:03.145114    6872 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0923 10:44:03.145198    6872 start.go:495] detecting cgroup driver to use...
	I0923 10:44:03.145351    6872 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 10:44:03.145366    6872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 10:44:03.177981    6872 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0923 10:44:03.192432    6872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 10:44:03.229959    6872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 10:44:03.251093    6872 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 10:44:03.262401    6872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 10:44:03.299407    6872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 10:44:03.334819    6872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 10:44:03.368285    6872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 10:44:03.409977    6872 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 10:44:03.442074    6872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 10:44:03.472546    6872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 10:44:03.509899    6872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
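The run of sed -i edits above rewrites /etc/containerd/config.toml in place: pin the sandbox (pause) image, disable restrict_oom_score_adj, force SystemdCgroup = false to match the detected cgroupfs driver, migrate runtime types to io.containerd.runc.v2, set conf_dir, and re-enable unprivileged ports. The cgroup-driver edit, reproduced on a scratch file:

```shell
# Reproduce the SystemdCgroup edit from the log against a scratch config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# cgroupfs was detected as the host cgroup driver, so force SystemdCgroup off;
# the captured indentation (\1) keeps the TOML layout intact.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep SystemdCgroup "$cfg"   # now reads: SystemdCgroup = false
```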
	I0923 10:44:03.548331    6872 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 10:44:03.570469    6872 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0923 10:44:03.582833    6872 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
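The two commands above verify the kernel networking prerequisites for Kubernetes: net.bridge.bridge-nf-call-iptables = 1 (bridged traffic traverses iptables) and IPv4 forwarding enabled. Reading the current values needs no privileges; writing them, as the log does with echo 1 > /proc/sys/net/ipv4/ip_forward, requires root:

```shell
# Inspect the two knobs the provisioner checks/sets; this sketch only reads,
# since writing them requires root.
cat /proc/sys/net/ipv4/ip_forward
# bridge-nf-call-iptables only exists once the br_netfilter module is loaded:
if [ -f /proc/sys/net/bridge/bridge-nf-call-iptables ]; then
    cat /proc/sys/net/bridge/bridge-nf-call-iptables
fi
```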
	I0923 10:44:03.617800    6872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:44:03.799881    6872 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0923 10:44:14.414592    6872 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (10.6141817s)
	I0923 10:44:14.414684    6872 start.go:495] detecting cgroup driver to use...
	I0923 10:44:14.414684    6872 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 10:44:14.428297    6872 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 10:44:14.454371    6872 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0923 10:44:14.454371    6872 command_runner.go:130] > [Unit]
	I0923 10:44:14.454371    6872 command_runner.go:130] > Description=Docker Application Container Engine
	I0923 10:44:14.454371    6872 command_runner.go:130] > Documentation=https://docs.docker.com
	I0923 10:44:14.454371    6872 command_runner.go:130] > BindsTo=containerd.service
	I0923 10:44:14.454371    6872 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0923 10:44:14.454371    6872 command_runner.go:130] > Wants=network-online.target
	I0923 10:44:14.454371    6872 command_runner.go:130] > Requires=docker.socket
	I0923 10:44:14.454371    6872 command_runner.go:130] > StartLimitBurst=3
	I0923 10:44:14.454371    6872 command_runner.go:130] > StartLimitIntervalSec=60
	I0923 10:44:14.454371    6872 command_runner.go:130] > [Service]
	I0923 10:44:14.454371    6872 command_runner.go:130] > Type=notify
	I0923 10:44:14.454371    6872 command_runner.go:130] > Restart=on-failure
	I0923 10:44:14.454371    6872 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0923 10:44:14.454371    6872 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0923 10:44:14.454371    6872 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0923 10:44:14.454371    6872 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0923 10:44:14.454371    6872 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0923 10:44:14.454371    6872 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0923 10:44:14.454371    6872 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0923 10:44:14.454371    6872 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0923 10:44:14.454371    6872 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0923 10:44:14.454371    6872 command_runner.go:130] > ExecStart=
	I0923 10:44:14.454371    6872 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0923 10:44:14.454371    6872 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0923 10:44:14.454371    6872 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0923 10:44:14.454371    6872 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0923 10:44:14.454371    6872 command_runner.go:130] > LimitNOFILE=infinity
	I0923 10:44:14.454371    6872 command_runner.go:130] > LimitNPROC=infinity
	I0923 10:44:14.454371    6872 command_runner.go:130] > LimitCORE=infinity
	I0923 10:44:14.454921    6872 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0923 10:44:14.454921    6872 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0923 10:44:14.454921    6872 command_runner.go:130] > TasksMax=infinity
	I0923 10:44:14.454921    6872 command_runner.go:130] > TimeoutStartSec=0
	I0923 10:44:14.454921    6872 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0923 10:44:14.454921    6872 command_runner.go:130] > Delegate=yes
	I0923 10:44:14.454921    6872 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0923 10:44:14.454921    6872 command_runner.go:130] > KillMode=process
	I0923 10:44:14.454921    6872 command_runner.go:130] > [Install]
	I0923 10:44:14.454921    6872 command_runner.go:130] > WantedBy=multi-user.target
	I0923 10:44:14.455066    6872 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0923 10:44:14.466299    6872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 10:44:14.497280    6872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 10:44:14.531264    6872 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0923 10:44:14.544671    6872 ssh_runner.go:195] Run: which cri-dockerd
	I0923 10:44:14.555309    6872 command_runner.go:130] > /usr/bin/cri-dockerd
	I0923 10:44:14.567583    6872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 10:44:14.598390    6872 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 10:44:14.650277    6872 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 10:44:14.856102    6872 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 10:44:15.025453    6872 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 10:44:15.025563    6872 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 10:44:15.077483    6872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:44:15.267631    6872 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 10:44:16.127562    6872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 10:44:16.164540    6872 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0923 10:44:16.219653    6872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 10:44:16.254717    6872 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 10:44:16.424821    6872 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 10:44:16.588853    6872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:44:16.751892    6872 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 10:44:16.797651    6872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 10:44:16.831513    6872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:44:16.986387    6872 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 10:44:17.143170    6872 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 10:44:17.157738    6872 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 10:44:17.171585    6872 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0923 10:44:17.171635    6872 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0923 10:44:17.171635    6872 command_runner.go:130] > Device: 93h/147d	Inode: 718         Links: 1
	I0923 10:44:17.171684    6872 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0923 10:44:17.171684    6872 command_runner.go:130] > Access: 2024-09-23 10:44:17.099529905 +0000
	I0923 10:44:17.171684    6872 command_runner.go:130] > Modify: 2024-09-23 10:44:16.999514152 +0000
	I0923 10:44:17.171723    6872 command_runner.go:130] > Change: 2024-09-23 10:44:16.999514152 +0000
	I0923 10:44:17.171723    6872 command_runner.go:130] >  Birth: -
	I0923 10:44:17.171723    6872 start.go:563] Will wait 60s for crictl version
	I0923 10:44:17.184767    6872 ssh_runner.go:195] Run: which crictl
	I0923 10:44:17.195582    6872 command_runner.go:130] > /usr/bin/crictl
	I0923 10:44:17.207399    6872 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 10:44:17.278902    6872 command_runner.go:130] > Version:  0.1.0
	I0923 10:44:17.278945    6872 command_runner.go:130] > RuntimeName:  docker
	I0923 10:44:17.278945    6872 command_runner.go:130] > RuntimeVersion:  27.3.0
	I0923 10:44:17.278945    6872 command_runner.go:130] > RuntimeApiVersion:  v1
	I0923 10:44:17.278945    6872 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0923 10:44:17.288502    6872 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 10:44:17.348070    6872 command_runner.go:130] > 27.3.0
	I0923 10:44:17.357412    6872 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 10:44:17.414827    6872 command_runner.go:130] > 27.3.0
	I0923 10:44:17.418038    6872 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0923 10:44:17.427387    6872 cli_runner.go:164] Run: docker exec -t functional-734700 dig +short host.docker.internal
	I0923 10:44:17.603533    6872 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0923 10:44:17.619066    6872 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0923 10:44:17.632413    6872 command_runner.go:130] > 192.168.65.254	host.minikube.internal
	I0923 10:44:17.641898    6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-734700
	I0923 10:44:17.712699    6872 kubeadm.go:883] updating cluster {Name:functional-734700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-734700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 10:44:17.712699    6872 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 10:44:17.723531    6872 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 10:44:17.763317    6872 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.1
	I0923 10:44:17.763502    6872 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.1
	I0923 10:44:17.763540    6872 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.1
	I0923 10:44:17.763571    6872 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.1
	I0923 10:44:17.763614    6872 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0923 10:44:17.763614    6872 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0923 10:44:17.763690    6872 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0923 10:44:17.763741    6872 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 10:44:17.767783    6872 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 10:44:17.767783    6872 docker.go:615] Images already preloaded, skipping extraction
	I0923 10:44:17.779564    6872 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 10:44:17.825653    6872 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.1
	I0923 10:44:17.826339    6872 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.1
	I0923 10:44:17.826339    6872 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.1
	I0923 10:44:17.826339    6872 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.1
	I0923 10:44:17.826339    6872 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0923 10:44:17.826339    6872 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I0923 10:44:17.826339    6872 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0923 10:44:17.826339    6872 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 10:44:17.826339    6872 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 10:44:17.826339    6872 cache_images.go:84] Images are preloaded, skipping loading
	I0923 10:44:17.826339    6872 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.31.1 docker true true} ...
	I0923 10:44:17.826861    6872 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-734700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:functional-734700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 10:44:17.836073    6872 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0923 10:44:17.918002    6872 command_runner.go:130] > cgroupfs
	I0923 10:44:17.921743    6872 cni.go:84] Creating CNI manager for ""
	I0923 10:44:17.921800    6872 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 10:44:17.921800    6872 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 10:44:17.921800    6872 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-734700 NodeName:functional-734700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 10:44:17.921800    6872 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-734700"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 10:44:17.934859    6872 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 10:44:17.955335    6872 command_runner.go:130] > kubeadm
	I0923 10:44:17.955335    6872 command_runner.go:130] > kubectl
	I0923 10:44:17.955335    6872 command_runner.go:130] > kubelet
	I0923 10:44:17.955335    6872 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 10:44:17.968348    6872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 10:44:17.988403    6872 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0923 10:44:18.015929    6872 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 10:44:18.052157    6872 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0923 10:44:18.100022    6872 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0923 10:44:18.111284    6872 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I0923 10:44:18.123596    6872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:44:18.288347    6872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:44:18.311961    6872 certs.go:68] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-734700 for IP: 192.168.49.2
	I0923 10:44:18.312950    6872 certs.go:194] generating shared ca certs ...
	I0923 10:44:18.312950    6872 certs.go:226] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:44:18.312950    6872 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I0923 10:44:18.313955    6872 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I0923 10:44:18.313955    6872 certs.go:256] generating profile certs ...
	I0923 10:44:18.313955    6872 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-734700\client.key
	I0923 10:44:18.313955    6872 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-734700\apiserver.key.0eefe5e5
	I0923 10:44:18.314951    6872 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-734700\proxy-client.key
	I0923 10:44:18.314951    6872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0923 10:44:18.314951    6872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0923 10:44:18.314951    6872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 10:44:18.314951    6872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0923 10:44:18.314951    6872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-734700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0923 10:44:18.314951    6872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-734700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0923 10:44:18.314951    6872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-734700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0923 10:44:18.315950    6872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-734700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0923 10:44:18.315950    6872 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4316.pem (1338 bytes)
	W0923 10:44:18.315950    6872 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4316_empty.pem, impossibly tiny 0 bytes
	I0923 10:44:18.315950    6872 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0923 10:44:18.316950    6872 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0923 10:44:18.316950    6872 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0923 10:44:18.316950    6872 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0923 10:44:18.316950    6872 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\43162.pem (1708 bytes)
	I0923 10:44:18.317950    6872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\43162.pem -> /usr/share/ca-certificates/43162.pem
	I0923 10:44:18.317950    6872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:44:18.317950    6872 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4316.pem -> /usr/share/ca-certificates/4316.pem
	I0923 10:44:18.318950    6872 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 10:44:18.367215    6872 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 10:44:18.414982    6872 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 10:44:18.464819    6872 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 10:44:18.507612    6872 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-734700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0923 10:44:18.552839    6872 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-734700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 10:44:18.593548    6872 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-734700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 10:44:18.641200    6872 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-734700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 10:44:18.686433    6872 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\43162.pem --> /usr/share/ca-certificates/43162.pem (1708 bytes)
	I0923 10:44:18.728607    6872 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 10:44:18.768294    6872 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4316.pem --> /usr/share/ca-certificates/4316.pem (1338 bytes)
	I0923 10:44:18.823574    6872 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 10:44:18.873125    6872 ssh_runner.go:195] Run: openssl version
	I0923 10:44:18.888099    6872 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0923 10:44:18.902271    6872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43162.pem && ln -fs /usr/share/ca-certificates/43162.pem /etc/ssl/certs/43162.pem"
	I0923 10:44:18.938055    6872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43162.pem
	I0923 10:44:18.950059    6872 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 23 10:42 /usr/share/ca-certificates/43162.pem
	I0923 10:44:18.950346    6872 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 10:42 /usr/share/ca-certificates/43162.pem
	I0923 10:44:18.961764    6872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43162.pem
	I0923 10:44:18.979537    6872 command_runner.go:130] > 3ec20f2e
	I0923 10:44:18.991041    6872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43162.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 10:44:19.022278    6872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 10:44:19.059079    6872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:44:19.070430    6872 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 23 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:44:19.071381    6872 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:44:19.082874    6872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:44:19.097483    6872 command_runner.go:130] > b5213941
	I0923 10:44:19.110287    6872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 10:44:19.142750    6872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4316.pem && ln -fs /usr/share/ca-certificates/4316.pem /etc/ssl/certs/4316.pem"
	I0923 10:44:19.176864    6872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4316.pem
	I0923 10:44:19.188484    6872 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 23 10:42 /usr/share/ca-certificates/4316.pem
	I0923 10:44:19.188484    6872 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 10:42 /usr/share/ca-certificates/4316.pem
	I0923 10:44:19.200925    6872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4316.pem
	I0923 10:44:19.213290    6872 command_runner.go:130] > 51391683
	I0923 10:44:19.224286    6872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4316.pem /etc/ssl/certs/51391683.0"
	I0923 10:44:19.254193    6872 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 10:44:19.266923    6872 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 10:44:19.267051    6872 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0923 10:44:19.267051    6872 command_runner.go:130] > Device: 830h/2096d	Inode: 17051       Links: 1
	I0923 10:44:19.267051    6872 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0923 10:44:19.267051    6872 command_runner.go:130] > Access: 2024-09-23 10:43:08.165393442 +0000
	I0923 10:44:19.267051    6872 command_runner.go:130] > Modify: 2024-09-23 10:43:08.165393442 +0000
	I0923 10:44:19.267051    6872 command_runner.go:130] > Change: 2024-09-23 10:43:08.165393442 +0000
	I0923 10:44:19.267051    6872 command_runner.go:130] >  Birth: 2024-09-23 10:43:08.165393442 +0000
	I0923 10:44:19.278418    6872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 10:44:19.294242    6872 command_runner.go:130] > Certificate will not expire
	I0923 10:44:19.304966    6872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 10:44:19.321541    6872 command_runner.go:130] > Certificate will not expire
	I0923 10:44:19.332688    6872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 10:44:19.347350    6872 command_runner.go:130] > Certificate will not expire
	I0923 10:44:19.358337    6872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 10:44:19.373584    6872 command_runner.go:130] > Certificate will not expire
	I0923 10:44:19.386998    6872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 10:44:19.402652    6872 command_runner.go:130] > Certificate will not expire
	I0923 10:44:19.414016    6872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0923 10:44:19.429175    6872 command_runner.go:130] > Certificate will not expire
	I0923 10:44:19.430097    6872 kubeadm.go:392] StartCluster: {Name:functional-734700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-734700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:44:19.439133    6872 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 10:44:19.495365    6872 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 10:44:19.516219    6872 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0923 10:44:19.516257    6872 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0923 10:44:19.516257    6872 command_runner.go:130] > /var/lib/minikube/etcd:
	I0923 10:44:19.516303    6872 command_runner.go:130] > member
	I0923 10:44:19.516349    6872 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0923 10:44:19.516401    6872 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0923 10:44:19.527330    6872 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0923 10:44:19.546863    6872 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0923 10:44:19.555517    6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-734700
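The `docker container inspect -f` calls above use a Go template to pull the published host port for `8441/tcp` out of the container's `NetworkSettings`. A minimal standalone sketch of that template evaluation, run against a hand-built stand-in for the inspect JSON rather than real Docker output (the `Inspect`/`hostPort` names are illustrative, not minikube's):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// PortBinding mirrors the shape Docker reports for a published port.
type PortBinding struct {
	HostIP   string
	HostPort string
}

// Inspect is a tiny stand-in for the container-inspect document that
// `docker container inspect -f` renders its format template against.
type Inspect struct {
	NetworkSettings struct {
		Ports map[string][]PortBinding
	}
}

// hostPort evaluates the same kind of template seen in the log:
// {{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}
func hostPort(c Inspect, containerPort string) (string, error) {
	tmpl, err := template.New("port").Parse(fmt.Sprintf(
		`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, containerPort))
	if err != nil {
		return "", err
	}
	var out bytes.Buffer
	if err := tmpl.Execute(&out, c); err != nil {
		return "", err
	}
	return out.String(), nil
}

func main() {
	var c Inspect
	c.NetworkSettings.Ports = map[string][]PortBinding{
		"8441/tcp": {{HostIP: "127.0.0.1", HostPort: "57730"}},
	}
	p, err := hostPort(c, "8441/tcp")
	if err != nil {
		panic(err)
	}
	fmt.Println(p) // the host port the apiserver is published on
}
```

The nested `index` calls are needed because `Ports` is a map of port-spec strings to slices of bindings, so the template first indexes the map by `"8441/tcp"` and then takes element 0 of the resulting slice.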
	I0923 10:44:19.627572    6872 kubeconfig.go:125] found "functional-734700" server: "https://127.0.0.1:57730"
	I0923 10:44:19.629010    6872 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0923 10:44:19.629290    6872 kapi.go:59] client config for functional-734700: &rest.Config{Host:"https://127.0.0.1:57730", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e3bc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 10:44:19.631200    6872 cert_rotation.go:140] Starting client certificate rotation controller
	I0923 10:44:19.641615    6872 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0923 10:44:19.661162    6872 kubeadm.go:630] The running cluster does not require reconfiguration: 127.0.0.1
	I0923 10:44:19.661162    6872 kubeadm.go:597] duration metric: took 144.7172ms to restartPrimaryControlPlane
	I0923 10:44:19.661162    6872 kubeadm.go:394] duration metric: took 231.0539ms to StartCluster
	I0923 10:44:19.661162    6872 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:44:19.661162    6872 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0923 10:44:19.661162    6872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:44:19.664551    6872 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 10:44:19.664623    6872 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 10:44:19.664821    6872 addons.go:69] Setting storage-provisioner=true in profile "functional-734700"
	I0923 10:44:19.664821    6872 addons.go:69] Setting default-storageclass=true in profile "functional-734700"
	I0923 10:44:19.664821    6872 addons.go:234] Setting addon storage-provisioner=true in "functional-734700"
	W0923 10:44:19.664903    6872 addons.go:243] addon storage-provisioner should already be in state true
	I0923 10:44:19.664903    6872 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-734700"
	I0923 10:44:19.665004    6872 host.go:66] Checking if "functional-734700" exists ...
	I0923 10:44:19.665004    6872 config.go:182] Loaded profile config "functional-734700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:44:19.669427    6872 out.go:177] * Verifying Kubernetes components...
	I0923 10:44:19.684756    6872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:44:19.684756    6872 cli_runner.go:164] Run: docker container inspect functional-734700 --format={{.State.Status}}
	I0923 10:44:19.691880    6872 cli_runner.go:164] Run: docker container inspect functional-734700 --format={{.State.Status}}
	I0923 10:44:19.757160    6872 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0923 10:44:19.758118    6872 kapi.go:59] client config for functional-734700: &rest.Config{Host:"https://127.0.0.1:57730", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1e3bc40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0923 10:44:19.758790    6872 addons.go:234] Setting addon default-storageclass=true in "functional-734700"
	W0923 10:44:19.758790    6872 addons.go:243] addon default-storageclass should already be in state true
	I0923 10:44:19.758790    6872 host.go:66] Checking if "functional-734700" exists ...
	I0923 10:44:19.763259    6872 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 10:44:19.765345    6872 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:44:19.765345    6872 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 10:44:19.775760    6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-734700
	I0923 10:44:19.781901    6872 cli_runner.go:164] Run: docker container inspect functional-734700 --format={{.State.Status}}
	I0923 10:44:19.870158    6872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57731 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-734700\id_rsa Username:docker}
	I0923 10:44:19.871383    6872 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 10:44:19.871383    6872 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 10:44:19.875776    6872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:44:19.880741    6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-734700
	I0923 10:44:19.916516    6872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-734700
	I0923 10:44:19.952446    6872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57731 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-734700\id_rsa Username:docker}
	I0923 10:44:19.981458    6872 node_ready.go:35] waiting up to 6m0s for node "functional-734700" to be "Ready" ...
	I0923 10:44:19.982477    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:19.982477    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:19.982477    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:19.982477    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:19.984439    6872 round_trippers.go:574] Response Status:  in 1 milliseconds
	I0923 10:44:19.984439    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:20.033987    6872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:44:20.122642    6872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 10:44:20.150727    6872 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0923 10:44:20.156680    6872 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0923 10:44:20.156680    6872 retry.go:31] will retry after 348.78761ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0923 10:44:20.219135    6872 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0923 10:44:20.224171    6872 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0923 10:44:20.224223    6872 retry.go:31] will retry after 249.287254ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0923 10:44:20.486504    6872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0923 10:44:20.518840    6872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:44:20.593106    6872 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0923 10:44:20.598526    6872 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0923 10:44:20.598526    6872 retry.go:31] will retry after 210.521071ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0923 10:44:20.630268    6872 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0923 10:44:20.636144    6872 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0923 10:44:20.636144    6872 retry.go:31] will retry after 369.327049ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0923 10:44:20.821325    6872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0923 10:44:20.928217    6872 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0923 10:44:20.932223    6872 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0923 10:44:20.932223    6872 retry.go:31] will retry after 555.852319ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0923 10:44:20.985228    6872 with_retry.go:234] Got a Retry-After 1s response for attempt 1 to https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:20.985228    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:20.985228    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:20.985228    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:20.985228    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:20.988226    6872 round_trippers.go:574] Response Status:  in 2 milliseconds
	I0923 10:44:20.988226    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:21.017517    6872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:44:21.109883    6872 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0923 10:44:21.115429    6872 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0923 10:44:21.115429    6872 retry.go:31] will retry after 639.87566ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0923 10:44:21.499069    6872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0923 10:44:21.591196    6872 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0923 10:44:21.596945    6872 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0923 10:44:21.596945    6872 retry.go:31] will retry after 863.991188ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0923 10:44:21.769160    6872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:44:21.861936    6872 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0923 10:44:21.869554    6872 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0923 10:44:21.869554    6872 retry.go:31] will retry after 1.128430171s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0923 10:44:21.988964    6872 with_retry.go:234] Got a Retry-After 1s response for attempt 2 to https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:21.989365    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:21.989422    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:21.989422    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:21.989422    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:21.992263    6872 round_trippers.go:574] Response Status:  in 2 milliseconds
	I0923 10:44:21.992263    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:22.473557    6872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0923 10:44:22.574922    6872 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0923 10:44:22.579696    6872 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0923 10:44:22.579862    6872 retry.go:31] will retry after 886.537413ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0923 10:44:22.992606    6872 with_retry.go:234] Got a Retry-After 1s response for attempt 3 to https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:22.992606    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:22.992606    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:22.992606    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:22.992606    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:22.997032    6872 round_trippers.go:574] Response Status:  in 4 milliseconds
	I0923 10:44:22.997032    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:23.010475    6872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:44:23.122356    6872 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0923 10:44:23.133175    6872 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0923 10:44:23.133175    6872 retry.go:31] will retry after 1.545929173s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0923 10:44:23.481856    6872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0923 10:44:23.574948    6872 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0923 10:44:23.578648    6872 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0923 10:44:23.578648    6872 retry.go:31] will retry after 1.203171659s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0923 10:44:23.998997    6872 with_retry.go:234] Got a Retry-After 1s response for attempt 4 to https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:23.999239    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:23.999239    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:23.999239    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:23.999239    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:24.003228    6872 round_trippers.go:574] Response Status:  in 3 milliseconds
	I0923 10:44:24.003228    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:24.690446    6872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:44:24.795534    6872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0923 10:44:25.003685    6872 with_retry.go:234] Got a Retry-After 1s response for attempt 5 to https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:25.003685    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:25.003685    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:25.003685    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:25.003685    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:25.008675    6872 round_trippers.go:574] Response Status:  in 4 milliseconds
	I0923 10:44:25.008807    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:25.299144    6872 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0923 10:44:25.309351    6872 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0923 10:44:25.309351    6872 retry.go:31] will retry after 1.446501007s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0923 10:44:25.595674    6872 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0923 10:44:25.603870    6872 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0923 10:44:25.603948    6872 retry.go:31] will retry after 2.824462265s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0923 10:44:26.009587    6872 with_retry.go:234] Got a Retry-After 1s response for attempt 6 to https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:26.009839    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:26.009917    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:26.009971    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:26.010033    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:26.014274    6872 round_trippers.go:574] Response Status:  in 4 milliseconds
	I0923 10:44:26.014444    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:26.767854    6872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:44:27.014905    6872 with_retry.go:234] Got a Retry-After 1s response for attempt 7 to https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:27.014905    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:27.014905    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:27.014905    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:27.014905    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:27.018279    6872 round_trippers.go:574] Response Status:  in 3 milliseconds
	I0923 10:44:27.018493    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:27.503926    6872 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0923 10:44:27.503926    6872 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0923 10:44:27.503926    6872 retry.go:31] will retry after 3.328092917s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0923 10:44:28.018720    6872 with_retry.go:234] Got a Retry-After 1s response for attempt 8 to https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:28.018720    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:28.018720    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:28.018720    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:28.018720    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:28.443806    6872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0923 10:44:30.849435    6872 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:44:31.401277    6872 round_trippers.go:574] Response Status: 200 OK in 3382 milliseconds
	I0923 10:44:31.401277    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:31.401277    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:31.401277    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:31.401277    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0923 10:44:31.401277    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0923 10:44:31.401277    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:31 GMT
	I0923 10:44:31.401277    6872 round_trippers.go:580]     Audit-Id: 6d1d41fe-cecf-4a79-82d8-7af44ba2b2b8
	I0923 10:44:31.401610    6872 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","resourceVersion":"393","creationTimestamp":"2024-09-23T10:43:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-734700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f69bf2f8ed9442c9c01edbe27466c5398c68b986","minikube.k8s.io/name":"functional-734700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T10_43_22_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T10:43:18Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 10:44:31.403954    6872 node_ready.go:49] node "functional-734700" has status "Ready":"True"
	I0923 10:44:31.404047    6872 node_ready.go:38] duration metric: took 11.4219563s for node "functional-734700" to be "Ready" ...
	I0923 10:44:31.404047    6872 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:44:31.404288    6872 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0923 10:44:31.404510    6872 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0923 10:44:31.404704    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/namespaces/kube-system/pods
	I0923 10:44:31.404801    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:31.404801    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:31.404914    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:31.603353    6872 round_trippers.go:574] Response Status: 200 OK in 198 milliseconds
	I0923 10:44:31.603400    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:31.603400    6872 round_trippers.go:580]     Audit-Id: 64aca08d-ff0b-40c0-b060-9836e8d8ad34
	I0923 10:44:31.603400    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:31.603400    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:31.603400    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0923 10:44:31.603549    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0923 10:44:31.603582    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:31 GMT
	I0923 10:44:31.610826    6872 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-mx6qw","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"93c1f293-a585-415f-97d9-77def36eec58","resourceVersion":"421","creationTimestamp":"2024-09-23T10:43:26Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"28e26013-7ea2-4f52-b2c9-aaeb7687566e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T10:43:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28e26013-7ea2-4f52-b2c9-aaeb7687566e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51425 chars]
	I0923 10:44:31.618993    6872 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mx6qw" in "kube-system" namespace to be "Ready" ...
	I0923 10:44:31.619780    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mx6qw
	I0923 10:44:31.619780    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:31.619780    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:31.619780    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:31.702530    6872 round_trippers.go:574] Response Status: 200 OK in 82 milliseconds
	I0923 10:44:31.702530    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:31.702530    6872 round_trippers.go:580]     Audit-Id: 9c457256-321b-43d9-bdfb-db5269e12e1f
	I0923 10:44:31.702530    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:31.702530    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:31.702638    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:31.702638    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:31.702638    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:31 GMT
	I0923 10:44:31.702979    6872 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-mx6qw","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"93c1f293-a585-415f-97d9-77def36eec58","resourceVersion":"421","creationTimestamp":"2024-09-23T10:43:26Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"28e26013-7ea2-4f52-b2c9-aaeb7687566e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T10:43:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28e26013-7ea2-4f52-b2c9-aaeb7687566e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6495 chars]
	I0923 10:44:31.704149    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:31.704212    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:31.704212    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:31.704212    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:31.907011    6872 round_trippers.go:574] Response Status: 200 OK in 202 milliseconds
	I0923 10:44:31.907228    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:31.907228    6872 round_trippers.go:580]     Audit-Id: 96491c51-2293-46ca-92da-dd127775e804
	I0923 10:44:31.907286    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:31.907286    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:31.907286    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:31.907286    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:31.907286    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:31 GMT
	I0923 10:44:31.907615    6872 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","resourceVersion":"393","creationTimestamp":"2024-09-23T10:43:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-734700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f69bf2f8ed9442c9c01edbe27466c5398c68b986","minikube.k8s.io/name":"functional-734700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T10_43_22_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T10:43:18Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 10:44:31.908450    6872 pod_ready.go:93] pod "coredns-7c65d6cfc9-mx6qw" in "kube-system" namespace has status "Ready":"True"
	I0923 10:44:31.908450    6872 pod_ready.go:82] duration metric: took 288.6562ms for pod "coredns-7c65d6cfc9-mx6qw" in "kube-system" namespace to be "Ready" ...
	I0923 10:44:31.908450    6872 pod_ready.go:79] waiting up to 6m0s for pod "etcd-functional-734700" in "kube-system" namespace to be "Ready" ...
	I0923 10:44:31.908450    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/namespaces/kube-system/pods/etcd-functional-734700
	I0923 10:44:31.908450    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:31.908450    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:31.908450    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:32.022549    6872 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0923 10:44:32.031413    6872 round_trippers.go:574] Response Status: 200 OK in 122 milliseconds
	I0923 10:44:32.031413    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:32.031413    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:32 GMT
	I0923 10:44:32.031413    6872 round_trippers.go:580]     Audit-Id: dec3624c-e49e-44f5-b9f1-8f8977d3ff20
	I0923 10:44:32.031413    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:32.031413    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:32.031413    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:32.031413    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:32.032155    6872 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-734700","namespace":"kube-system","uid":"0969955e-97ba-4756-b168-a3321b1eaf73","resourceVersion":"354","creationTimestamp":"2024-09-23T10:43:22Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"663bcde97e61adf67c0d9b9636b993c2","kubernetes.io/config.mirror":"663bcde97e61adf67c0d9b9636b993c2","kubernetes.io/config.seen":"2024-09-23T10:43:21.706063793Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T10:43:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6459 chars]
	I0923 10:44:32.032866    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:32.032866    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:32.032866    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:32.032866    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:32.093977    6872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (3.6499979s)
	I0923 10:44:32.094430    6872 round_trippers.go:463] GET https://127.0.0.1:57730/apis/storage.k8s.io/v1/storageclasses
	I0923 10:44:32.094430    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:32.094430    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:32.094430    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:32.097288    6872 round_trippers.go:574] Response Status: 200 OK in 64 milliseconds
	I0923 10:44:32.097288    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:32.097372    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:32.097372    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:32.097372    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:32 GMT
	I0923 10:44:32.097372    6872 round_trippers.go:580]     Audit-Id: fb7512d9-8398-40cc-b505-85f0ffba440c
	I0923 10:44:32.097372    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:32.097372    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:32.097920    6872 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","resourceVersion":"393","creationTimestamp":"2024-09-23T10:43:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-734700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f69bf2f8ed9442c9c01edbe27466c5398c68b986","minikube.k8s.io/name":"functional-734700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T10_43_22_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T10:43:18Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 10:44:32.098855    6872 pod_ready.go:93] pod "etcd-functional-734700" in "kube-system" namespace has status "Ready":"True"
	I0923 10:44:32.098911    6872 pod_ready.go:82] duration metric: took 190.4521ms for pod "etcd-functional-734700" in "kube-system" namespace to be "Ready" ...
	I0923 10:44:32.099007    6872 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-functional-734700" in "kube-system" namespace to be "Ready" ...
	I0923 10:44:32.099007    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-734700
	I0923 10:44:32.099191    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:32.099241    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:32.099241    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:32.105799    6872 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0923 10:44:32.105862    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:32.105862    6872 round_trippers.go:580]     Audit-Id: 7c2e6b6e-daf2-4a64-91d1-ecb11268085f
	I0923 10:44:32.105912    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:32.105912    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:32.105984    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:32.105984    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:32.105984    6872 round_trippers.go:580]     Content-Length: 1273
	I0923 10:44:32.105984    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:32 GMT
	I0923 10:44:32.107055    6872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 10:44:32.107096    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:32.107096    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:32.107096    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:32.107096    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:32 GMT
	I0923 10:44:32.107197    6872 round_trippers.go:580]     Audit-Id: b090d4b3-04da-49af-9935-592720ac7d99
	I0923 10:44:32.107096    6872 request.go:1351] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"436"},"items":[{"metadata":{"name":"standard","uid":"c919e7c8-f6b3-434a-84dc-e316b82807f4","resourceVersion":"342","creationTimestamp":"2024-09-23T10:43:28Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-23T10:43:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0923 10:44:32.107197    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:32.107392    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:32.107392    6872 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-734700","namespace":"kube-system","uid":"8b5cdbe4-c503-49e4-8f42-a1296d3edbfc","resourceVersion":"351","creationTimestamp":"2024-09-23T10:43:20Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"623ec1abd24f14e4a5a9c10bf7ecadf1","kubernetes.io/config.mirror":"623ec1abd24f14e4a5a9c10bf7ecadf1","kubernetes.io/config.seen":"2024-09-23T10:43:12.357560624Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T10:43:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8535 chars]
	I0923 10:44:32.108043    6872 request.go:1351] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c919e7c8-f6b3-434a-84dc-e316b82807f4","resourceVersion":"342","creationTimestamp":"2024-09-23T10:43:28Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-23T10:43:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0923 10:44:32.108043    6872 round_trippers.go:463] PUT https://127.0.0.1:57730/apis/storage.k8s.io/v1/storageclasses/standard
	I0923 10:44:32.108043    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:32.108043    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:32.108043    6872 round_trippers.go:473]     Content-Type: application/json
	I0923 10:44:32.108043    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:32.108738    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:32.108738    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:32.108738    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:32.108738    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:32.113184    6872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:44:32.113184    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:32.113184    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:32 GMT
	I0923 10:44:32.113184    6872 round_trippers.go:580]     Audit-Id: cd4e72dd-1b71-4632-87fd-ed1d98ba7829
	I0923 10:44:32.113184    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:32.113184    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:32.113184    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:32.113184    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:32.113996    6872 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","resourceVersion":"393","creationTimestamp":"2024-09-23T10:43:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-734700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f69bf2f8ed9442c9c01edbe27466c5398c68b986","minikube.k8s.io/name":"functional-734700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T10_43_22_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T10:43:18Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 10:44:32.113996    6872 pod_ready.go:93] pod "kube-apiserver-functional-734700" in "kube-system" namespace has status "Ready":"True"
	I0923 10:44:32.113996    6872 pod_ready.go:82] duration metric: took 14.988ms for pod "kube-apiserver-functional-734700" in "kube-system" namespace to be "Ready" ...
	I0923 10:44:32.114534    6872 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-functional-734700" in "kube-system" namespace to be "Ready" ...
	I0923 10:44:32.114623    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-734700
	I0923 10:44:32.114623    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:32.114623    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:32.114623    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:32.208815    6872 round_trippers.go:574] Response Status: 200 OK in 100 milliseconds
	I0923 10:44:32.208815    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:32.208815    6872 round_trippers.go:580]     Audit-Id: 7b4ce909-a514-4668-b7c0-389d9143a560
	I0923 10:44:32.208815    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:32.208815    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:32.208815    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:32.208815    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:32.208815    6872 round_trippers.go:580]     Content-Length: 1220
	I0923 10:44:32.208815    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:32 GMT
	I0923 10:44:32.208815    6872 request.go:1351] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"c919e7c8-f6b3-434a-84dc-e316b82807f4","resourceVersion":"342","creationTimestamp":"2024-09-23T10:43:28Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-23T10:43:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0923 10:44:32.208815    6872 round_trippers.go:574] Response Status: 200 OK in 94 milliseconds
	I0923 10:44:32.208815    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:32.208815    6872 round_trippers.go:580]     Audit-Id: 204f7195-ba8f-47d7-aaa4-804df05b2848
	I0923 10:44:32.208815    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:32.208815    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:32.208815    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:32.208815    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:32.208815    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:32 GMT
	I0923 10:44:32.208815    6872 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-734700","namespace":"kube-system","uid":"b3db7762-6768-4139-8da3-5e6560e2778e","resourceVersion":"435","creationTimestamp":"2024-09-23T10:43:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b124e61e83cbd59237a1d77eba2f0baf","kubernetes.io/config.mirror":"b124e61e83cbd59237a1d77eba2f0baf","kubernetes.io/config.seen":"2024-09-23T10:43:21.706074594Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T10:43:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8532 chars]
	I0923 10:44:32.210569    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:32.210569    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:32.210650    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:32.210650    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:32.220948    6872 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0923 10:44:32.220948    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:32.221022    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:32.221022    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:32.221022    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:32.221022    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:32.221022    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:32 GMT
	I0923 10:44:32.221022    6872 round_trippers.go:580]     Audit-Id: 2bd057aa-eb71-4096-9a8b-3292935d9337
	I0923 10:44:32.222384    6872 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","resourceVersion":"393","creationTimestamp":"2024-09-23T10:43:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-734700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f69bf2f8ed9442c9c01edbe27466c5398c68b986","minikube.k8s.io/name":"functional-734700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T10_43_22_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T10:43:18Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 10:44:32.615336    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-734700
	I0923 10:44:32.615336    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:32.615336    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:32.615336    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:32.698700    6872 round_trippers.go:574] Response Status: 200 OK in 83 milliseconds
	I0923 10:44:32.698756    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:32.698756    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:32.698756    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:32.698756    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:32.698756    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:32 GMT
	I0923 10:44:32.698756    6872 round_trippers.go:580]     Audit-Id: 2271cacd-79a8-45f3-bbb2-a65d158b3a95
	I0923 10:44:32.698756    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:32.699451    6872 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-734700","namespace":"kube-system","uid":"b3db7762-6768-4139-8da3-5e6560e2778e","resourceVersion":"435","creationTimestamp":"2024-09-23T10:43:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b124e61e83cbd59237a1d77eba2f0baf","kubernetes.io/config.mirror":"b124e61e83cbd59237a1d77eba2f0baf","kubernetes.io/config.seen":"2024-09-23T10:43:21.706074594Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T10:43:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8532 chars]
	I0923 10:44:32.700486    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:32.700636    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:32.700636    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:32.700636    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:32.720073    6872 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0923 10:44:32.720164    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:32.720164    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:32.720164    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:32.720164    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:32.720164    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:32 GMT
	I0923 10:44:32.720164    6872 round_trippers.go:580]     Audit-Id: 7f52c9a7-b039-442a-8028-ac042782bb63
	I0923 10:44:32.720164    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:32.720366    6872 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","resourceVersion":"393","creationTimestamp":"2024-09-23T10:43:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-734700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f69bf2f8ed9442c9c01edbe27466c5398c68b986","minikube.k8s.io/name":"functional-734700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T10_43_22_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T10:43:18Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 10:44:33.114860    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-734700
	I0923 10:44:33.114860    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:33.114860    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:33.114860    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:33.120027    6872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 10:44:33.120027    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:33.120027    6872 round_trippers.go:580]     Audit-Id: 090e0b54-c9de-4ba0-a6cb-db90709ac4d6
	I0923 10:44:33.120027    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:33.120027    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:33.120027    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:33.120027    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:33.120027    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:33 GMT
	I0923 10:44:33.121789    6872 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-734700","namespace":"kube-system","uid":"b3db7762-6768-4139-8da3-5e6560e2778e","resourceVersion":"444","creationTimestamp":"2024-09-23T10:43:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b124e61e83cbd59237a1d77eba2f0baf","kubernetes.io/config.mirror":"b124e61e83cbd59237a1d77eba2f0baf","kubernetes.io/config.seen":"2024-09-23T10:43:21.706074594Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T10:43:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8577 chars]
	I0923 10:44:33.122581    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:33.122581    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:33.122581    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:33.122581    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:33.133421    6872 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0923 10:44:33.133421    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:33.133421    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:33.133421    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:33 GMT
	I0923 10:44:33.133421    6872 round_trippers.go:580]     Audit-Id: debc0257-8875-4eae-ab94-508e18b5bbac
	I0923 10:44:33.133421    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:33.133421    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:33.133421    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:33.134140    6872 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","resourceVersion":"393","creationTimestamp":"2024-09-23T10:43:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-734700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f69bf2f8ed9442c9c01edbe27466c5398c68b986","minikube.k8s.io/name":"functional-734700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T10_43_22_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T10:43:18Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 10:44:33.498173    6872 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0923 10:44:33.498173    6872 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0923 10:44:33.498278    6872 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0923 10:44:33.498278    6872 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0923 10:44:33.498278    6872 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0923 10:44:33.498278    6872 command_runner.go:130] > pod/storage-provisioner configured
	I0923 10:44:33.498278    6872 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.6487181s)
	I0923 10:44:33.503145    6872 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0923 10:44:33.505233    6872 addons.go:510] duration metric: took 13.8400273s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0923 10:44:33.616548    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-734700
	I0923 10:44:33.616660    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:33.616660    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:33.616660    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:33.622128    6872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 10:44:33.622128    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:33.622128    6872 round_trippers.go:580]     Audit-Id: 1e35acd1-aa14-41ce-b728-c8eff2d388b3
	I0923 10:44:33.622128    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:33.622128    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:33.622128    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:33.622303    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:33.622303    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:33 GMT
	I0923 10:44:33.622303    6872 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-734700","namespace":"kube-system","uid":"b3db7762-6768-4139-8da3-5e6560e2778e","resourceVersion":"444","creationTimestamp":"2024-09-23T10:43:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b124e61e83cbd59237a1d77eba2f0baf","kubernetes.io/config.mirror":"b124e61e83cbd59237a1d77eba2f0baf","kubernetes.io/config.seen":"2024-09-23T10:43:21.706074594Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T10:43:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8577 chars]
	I0923 10:44:33.623629    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:33.623655    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:33.623655    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:33.623655    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:33.629718    6872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 10:44:33.629718    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:33.629718    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:33.629718    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:33 GMT
	I0923 10:44:33.629718    6872 round_trippers.go:580]     Audit-Id: 651ba26a-dc9a-49fe-9ce8-fb2c00640a88
	I0923 10:44:33.629718    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:33.629718    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:33.629718    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:33.629718    6872 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","resourceVersion":"393","creationTimestamp":"2024-09-23T10:43:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-734700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f69bf2f8ed9442c9c01edbe27466c5398c68b986","minikube.k8s.io/name":"functional-734700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T10_43_22_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T10:43:18Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 10:44:34.116611    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-734700
	I0923 10:44:34.116691    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:34.116770    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:34.116770    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:34.121649    6872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:44:34.121649    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:34.121649    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:34.121649    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:34 GMT
	I0923 10:44:34.121649    6872 round_trippers.go:580]     Audit-Id: 556d86b4-f3bc-426b-bd70-91b1ec997db6
	I0923 10:44:34.121649    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:34.121649    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:34.121649    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:34.121649    6872 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-734700","namespace":"kube-system","uid":"b3db7762-6768-4139-8da3-5e6560e2778e","resourceVersion":"444","creationTimestamp":"2024-09-23T10:43:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b124e61e83cbd59237a1d77eba2f0baf","kubernetes.io/config.mirror":"b124e61e83cbd59237a1d77eba2f0baf","kubernetes.io/config.seen":"2024-09-23T10:43:21.706074594Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T10:43:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8577 chars]
	I0923 10:44:34.121649    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:34.121649    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:34.121649    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:34.121649    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:34.130681    6872 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0923 10:44:34.130729    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:34.130729    6872 round_trippers.go:580]     Audit-Id: 63b7e139-534f-4912-878b-ade8f4ae1ff2
	I0923 10:44:34.130729    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:34.130729    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:34.130729    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:34.130729    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:34.130729    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:34 GMT
	I0923 10:44:34.130729    6872 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","resourceVersion":"393","creationTimestamp":"2024-09-23T10:43:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-734700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f69bf2f8ed9442c9c01edbe27466c5398c68b986","minikube.k8s.io/name":"functional-734700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T10_43_22_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T10:43:18Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 10:44:34.130729    6872 pod_ready.go:103] pod "kube-controller-manager-functional-734700" in "kube-system" namespace has status "Ready":"False"
	I0923 10:44:34.615530    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-734700
	I0923 10:44:34.615611    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:34.615611    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:34.615611    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:34.620401    6872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:44:34.620469    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:34.620469    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:34.620494    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:34 GMT
	I0923 10:44:34.620494    6872 round_trippers.go:580]     Audit-Id: 81e78711-4c38-4793-a31c-864f634569df
	I0923 10:44:34.620494    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:34.620494    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:34.620494    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:34.620840    6872 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-734700","namespace":"kube-system","uid":"b3db7762-6768-4139-8da3-5e6560e2778e","resourceVersion":"444","creationTimestamp":"2024-09-23T10:43:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b124e61e83cbd59237a1d77eba2f0baf","kubernetes.io/config.mirror":"b124e61e83cbd59237a1d77eba2f0baf","kubernetes.io/config.seen":"2024-09-23T10:43:21.706074594Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T10:43:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8577 chars]
	I0923 10:44:34.621054    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:34.621054    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:34.621054    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:34.621054    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:34.627530    6872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 10:44:34.627530    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:34.627530    6872 round_trippers.go:580]     Audit-Id: 98720412-52b1-4f19-bbc2-a4cbab33eeab
	I0923 10:44:34.627530    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:34.627530    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:34.627530    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:34.627530    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:34.627530    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:34 GMT
	I0923 10:44:34.627530    6872 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","resourceVersion":"393","creationTimestamp":"2024-09-23T10:43:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-734700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f69bf2f8ed9442c9c01edbe27466c5398c68b986","minikube.k8s.io/name":"functional-734700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T10_43_22_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T10:43:18Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 10:44:35.115978    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-734700
	I0923 10:44:35.115978    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:35.115978    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:35.115978    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:35.127089    6872 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0923 10:44:35.127192    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:35.127192    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:35.127192    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:35.127192    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:35.127192    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:35 GMT
	I0923 10:44:35.127192    6872 round_trippers.go:580]     Audit-Id: 2bec321e-8512-4883-8f51-9b3f1791ac8c
	I0923 10:44:35.127192    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:35.127464    6872 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-734700","namespace":"kube-system","uid":"b3db7762-6768-4139-8da3-5e6560e2778e","resourceVersion":"444","creationTimestamp":"2024-09-23T10:43:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b124e61e83cbd59237a1d77eba2f0baf","kubernetes.io/config.mirror":"b124e61e83cbd59237a1d77eba2f0baf","kubernetes.io/config.seen":"2024-09-23T10:43:21.706074594Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T10:43:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8577 chars]
	I0923 10:44:35.128154    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:35.128260    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:35.128260    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:35.128260    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:35.139287    6872 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0923 10:44:35.139287    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:35.139287    6872 round_trippers.go:580]     Audit-Id: ae9dc2f5-691c-4645-8288-18e798872799
	I0923 10:44:35.139287    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:35.139287    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:35.139377    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:35.139377    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:35.139377    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:35 GMT
	I0923 10:44:35.139460    6872 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","resourceVersion":"393","creationTimestamp":"2024-09-23T10:43:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-734700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f69bf2f8ed9442c9c01edbe27466c5398c68b986","minikube.k8s.io/name":"functional-734700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T10_43_22_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T10:43:18Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 10:44:35.616313    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-734700
	I0923 10:44:35.616398    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:35.616398    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:35.616398    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:35.622338    6872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 10:44:35.622338    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:35.622338    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:35 GMT
	I0923 10:44:35.622338    6872 round_trippers.go:580]     Audit-Id: 6728e4e8-8fb1-467f-8afa-f3edba58d2ac
	I0923 10:44:35.622338    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:35.622338    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:35.622462    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:35.622462    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:35.622549    6872 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-734700","namespace":"kube-system","uid":"b3db7762-6768-4139-8da3-5e6560e2778e","resourceVersion":"444","creationTimestamp":"2024-09-23T10:43:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b124e61e83cbd59237a1d77eba2f0baf","kubernetes.io/config.mirror":"b124e61e83cbd59237a1d77eba2f0baf","kubernetes.io/config.seen":"2024-09-23T10:43:21.706074594Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T10:43:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8577 chars]
	I0923 10:44:35.623692    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:35.623796    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:35.623857    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:35.623857    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:35.630249    6872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 10:44:35.630249    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:35.630249    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:35 GMT
	I0923 10:44:35.630308    6872 round_trippers.go:580]     Audit-Id: c6be05a6-4068-4c40-96b6-0d87072b5d8f
	I0923 10:44:35.630308    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:35.630308    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:35.630308    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:35.630308    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:35.630366    6872 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","resourceVersion":"393","creationTimestamp":"2024-09-23T10:43:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-734700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f69bf2f8ed9442c9c01edbe27466c5398c68b986","minikube.k8s.io/name":"functional-734700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T10_43_22_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T10:43:18Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 10:44:36.115184    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-734700
	I0923 10:44:36.115184    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:36.115184    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:36.115184    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:36.122240    6872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 10:44:36.122240    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:36.122240    6872 round_trippers.go:580]     Audit-Id: 13ffd458-6f3b-49dd-b12b-ccee71c2b936
	I0923 10:44:36.122240    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:36.122240    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:36.122240    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:36.122240    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:36.122240    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:36 GMT
	I0923 10:44:36.122727    6872 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-734700","namespace":"kube-system","uid":"b3db7762-6768-4139-8da3-5e6560e2778e","resourceVersion":"444","creationTimestamp":"2024-09-23T10:43:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b124e61e83cbd59237a1d77eba2f0baf","kubernetes.io/config.mirror":"b124e61e83cbd59237a1d77eba2f0baf","kubernetes.io/config.seen":"2024-09-23T10:43:21.706074594Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T10:43:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8577 chars]
	I0923 10:44:36.123333    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:36.123333    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:36.123333    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:36.123333    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:36.129316    6872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 10:44:36.129316    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:36.129316    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:36.129316    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:36.129316    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:36.129316    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:36.129316    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:36 GMT
	I0923 10:44:36.129316    6872 round_trippers.go:580]     Audit-Id: c3371aed-626c-4bb8-9005-089694be5a3e
	I0923 10:44:36.129316    6872 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","resourceVersion":"393","creationTimestamp":"2024-09-23T10:43:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-734700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f69bf2f8ed9442c9c01edbe27466c5398c68b986","minikube.k8s.io/name":"functional-734700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T10_43_22_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T10:43:18Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 10:44:36.615427    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-734700
	I0923 10:44:36.615427    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:36.615427    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:36.615427    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:36.621275    6872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 10:44:36.621275    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:36.621275    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:36 GMT
	I0923 10:44:36.621275    6872 round_trippers.go:580]     Audit-Id: f78862f4-ddf6-41ef-a57c-fe48553e6a6f
	I0923 10:44:36.621275    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:36.621275    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:36.621275    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:36.621275    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:36.621275    6872 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-734700","namespace":"kube-system","uid":"b3db7762-6768-4139-8da3-5e6560e2778e","resourceVersion":"444","creationTimestamp":"2024-09-23T10:43:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b124e61e83cbd59237a1d77eba2f0baf","kubernetes.io/config.mirror":"b124e61e83cbd59237a1d77eba2f0baf","kubernetes.io/config.seen":"2024-09-23T10:43:21.706074594Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T10:43:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8577 chars]
	I0923 10:44:36.622248    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:36.622248    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:36.622248    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:36.622248    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:36.628029    6872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 10:44:36.628091    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:36.628091    6872 round_trippers.go:580]     Audit-Id: 611655bc-4d8a-452b-a463-bfe440899e36
	I0923 10:44:36.628091    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:36.628091    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:36.628152    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:36.628152    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:36.628188    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:36 GMT
	I0923 10:44:36.628218    6872 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","resourceVersion":"393","creationTimestamp":"2024-09-23T10:43:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-734700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f69bf2f8ed9442c9c01edbe27466c5398c68b986","minikube.k8s.io/name":"functional-734700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T10_43_22_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T10:43:18Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 10:44:36.628218    6872 pod_ready.go:103] pod "kube-controller-manager-functional-734700" in "kube-system" namespace has status "Ready":"False"
	I0923 10:44:37.115485    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-734700
	I0923 10:44:37.115815    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:37.115815    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:37.115815    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:37.120595    6872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:44:37.120595    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:37.120595    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:37.120595    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:37.120595    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:37 GMT
	I0923 10:44:37.120595    6872 round_trippers.go:580]     Audit-Id: 9e0ea317-af30-4331-942d-f5a58a790e50
	I0923 10:44:37.120595    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:37.120797    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:37.121133    6872 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-734700","namespace":"kube-system","uid":"b3db7762-6768-4139-8da3-5e6560e2778e","resourceVersion":"444","creationTimestamp":"2024-09-23T10:43:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b124e61e83cbd59237a1d77eba2f0baf","kubernetes.io/config.mirror":"b124e61e83cbd59237a1d77eba2f0baf","kubernetes.io/config.seen":"2024-09-23T10:43:21.706074594Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T10:43:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8577 chars]
	I0923 10:44:37.121759    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:37.121850    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:37.121850    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:37.121884    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:37.128319    6872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 10:44:37.128319    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:37.128319    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:37 GMT
	I0923 10:44:37.128319    6872 round_trippers.go:580]     Audit-Id: 66800eb6-e49f-4e9e-ad8f-48893cf96256
	I0923 10:44:37.128319    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:37.128319    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:37.128319    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:37.128319    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:37.128960    6872 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","resourceVersion":"393","creationTimestamp":"2024-09-23T10:43:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-734700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f69bf2f8ed9442c9c01edbe27466c5398c68b986","minikube.k8s.io/name":"functional-734700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T10_43_22_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T10:43:18Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 10:44:37.615332    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-734700
	I0923 10:44:37.615332    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:37.615332    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:37.615332    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:37.622343    6872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 10:44:37.622343    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:37.622343    6872 round_trippers.go:580]     Audit-Id: 3a167fce-f13b-49f4-8f80-35377bca2b1c
	I0923 10:44:37.622343    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:37.622343    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:37.622343    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:37.622343    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:37.622343    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:37 GMT
	I0923 10:44:37.622891    6872 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-734700","namespace":"kube-system","uid":"b3db7762-6768-4139-8da3-5e6560e2778e","resourceVersion":"444","creationTimestamp":"2024-09-23T10:43:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b124e61e83cbd59237a1d77eba2f0baf","kubernetes.io/config.mirror":"b124e61e83cbd59237a1d77eba2f0baf","kubernetes.io/config.seen":"2024-09-23T10:43:21.706074594Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T10:43:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8577 chars]
	I0923 10:44:37.623055    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:37.623594    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:37.623594    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:37.623594    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:37.631076    6872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 10:44:37.631173    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:37.631173    6872 round_trippers.go:580]     Audit-Id: bbde5552-cd68-4d57-9ea9-bf1ac37381cd
	I0923 10:44:37.631173    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:37.631173    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:37.631218    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:37.631218    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:37.631218    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:37 GMT
	I0923 10:44:37.631218    6872 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","resourceVersion":"393","creationTimestamp":"2024-09-23T10:43:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-734700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f69bf2f8ed9442c9c01edbe27466c5398c68b986","minikube.k8s.io/name":"functional-734700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T10_43_22_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T10:43:18Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 10:44:38.115587    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-734700
	I0923 10:44:38.115663    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:38.115663    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:38.115663    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:38.122326    6872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 10:44:38.122326    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:38.122326    6872 round_trippers.go:580]     Audit-Id: 087d84cf-8fc1-4c4a-9974-da5eadabbb25
	I0923 10:44:38.122326    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:38.122326    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:38.122326    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:38.122326    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:38.122326    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:38 GMT
	I0923 10:44:38.122971    6872 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-734700","namespace":"kube-system","uid":"b3db7762-6768-4139-8da3-5e6560e2778e","resourceVersion":"535","creationTimestamp":"2024-09-23T10:43:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b124e61e83cbd59237a1d77eba2f0baf","kubernetes.io/config.mirror":"b124e61e83cbd59237a1d77eba2f0baf","kubernetes.io/config.seen":"2024-09-23T10:43:21.706074594Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T10:43:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8576 chars]
	I0923 10:44:38.123649    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:38.123649    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:38.123649    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:38.123649    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:38.129469    6872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 10:44:38.129469    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:38.129535    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:38.129580    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:38.129580    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:38.129580    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:38.129580    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:38 GMT
	I0923 10:44:38.129580    6872 round_trippers.go:580]     Audit-Id: ca44e2a4-b245-458f-989d-4a6b11d92bf4
	I0923 10:44:38.131577    6872 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","resourceVersion":"393","creationTimestamp":"2024-09-23T10:43:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-734700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f69bf2f8ed9442c9c01edbe27466c5398c68b986","minikube.k8s.io/name":"functional-734700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T10_43_22_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T10:43:18Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 10:44:38.615749    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-734700
	I0923 10:44:38.616319    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:38.616319    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:38.616319    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:38.621603    6872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 10:44:38.621603    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:38.621603    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:38.621603    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:38.621603    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:38 GMT
	I0923 10:44:38.621603    6872 round_trippers.go:580]     Audit-Id: 3494a704-70b5-498a-9923-788d6e2a5414
	I0923 10:44:38.621603    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:38.621603    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:38.622150    6872 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-734700","namespace":"kube-system","uid":"b3db7762-6768-4139-8da3-5e6560e2778e","resourceVersion":"536","creationTimestamp":"2024-09-23T10:43:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b124e61e83cbd59237a1d77eba2f0baf","kubernetes.io/config.mirror":"b124e61e83cbd59237a1d77eba2f0baf","kubernetes.io/config.seen":"2024-09-23T10:43:21.706074594Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T10:43:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8315 chars]
	I0923 10:44:38.622904    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:38.622972    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:38.623000    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:38.623000    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:38.638192    6872 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0923 10:44:38.638192    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:38.638192    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:38.638192    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:38.638192    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:38.638192    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:38.638192    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:38 GMT
	I0923 10:44:38.638192    6872 round_trippers.go:580]     Audit-Id: b7e663f6-27fe-406a-adf4-9bde39ff5417
	I0923 10:44:38.638192    6872 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","resourceVersion":"393","creationTimestamp":"2024-09-23T10:43:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-734700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f69bf2f8ed9442c9c01edbe27466c5398c68b986","minikube.k8s.io/name":"functional-734700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T10_43_22_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T10:43:18Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 10:44:38.638192    6872 pod_ready.go:93] pod "kube-controller-manager-functional-734700" in "kube-system" namespace has status "Ready":"True"
	I0923 10:44:38.638739    6872 pod_ready.go:82] duration metric: took 6.5238956s for pod "kube-controller-manager-functional-734700" in "kube-system" namespace to be "Ready" ...
	I0923 10:44:38.638739    6872 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nd2v2" in "kube-system" namespace to be "Ready" ...
	I0923 10:44:38.638739    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/namespaces/kube-system/pods/kube-proxy-nd2v2
	I0923 10:44:38.638889    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:38.638889    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:38.638889    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:38.644882    6872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 10:44:38.644882    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:38.644882    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:38.644882    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:38.644882    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:38 GMT
	I0923 10:44:38.644882    6872 round_trippers.go:580]     Audit-Id: eec896bc-456b-4c74-96a1-31691fa1664f
	I0923 10:44:38.644882    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:38.644882    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:38.645618    6872 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nd2v2","generateName":"kube-proxy-","namespace":"kube-system","uid":"f96eec79-84df-47d8-a5d9-1fbfffa680a7","resourceVersion":"446","creationTimestamp":"2024-09-23T10:43:26Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"0a1002fb-f10a-42dc-843d-f51f8f6ac4a0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T10:43:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0a1002fb-f10a-42dc-843d-f51f8f6ac4a0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6396 chars]
	I0923 10:44:38.646276    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:38.646318    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:38.646364    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:38.646364    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:38.649642    6872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:44:38.649642    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:38.649642    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:38.649642    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:38.649642    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:38.649642    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:38.649642    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:38 GMT
	I0923 10:44:38.649642    6872 round_trippers.go:580]     Audit-Id: 1d006ec2-e310-4c9e-aa19-cc484e45b776
	I0923 10:44:38.649642    6872 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","resourceVersion":"393","creationTimestamp":"2024-09-23T10:43:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-734700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f69bf2f8ed9442c9c01edbe27466c5398c68b986","minikube.k8s.io/name":"functional-734700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T10_43_22_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T10:43:18Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 10:44:38.649642    6872 pod_ready.go:93] pod "kube-proxy-nd2v2" in "kube-system" namespace has status "Ready":"True"
	I0923 10:44:38.649642    6872 pod_ready.go:82] duration metric: took 10.9027ms for pod "kube-proxy-nd2v2" in "kube-system" namespace to be "Ready" ...
	I0923 10:44:38.649642    6872 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-functional-734700" in "kube-system" namespace to be "Ready" ...
	I0923 10:44:38.650661    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-734700
	I0923 10:44:38.650764    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:38.650764    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:38.650764    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:38.655087    6872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:44:38.655087    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:38.655087    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:38.655087    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:38 GMT
	I0923 10:44:38.655087    6872 round_trippers.go:580]     Audit-Id: 9b03e0c4-086a-4ddd-81e2-3f617893e86e
	I0923 10:44:38.655087    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:38.655087    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:38.655087    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:38.655087    6872 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-734700","namespace":"kube-system","uid":"552935b0-7b97-44f1-9d44-3c98f30ff23e","resourceVersion":"443","creationTimestamp":"2024-09-23T10:43:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"aebb65c4c0d3ac1bf535bb11209d59fd","kubernetes.io/config.mirror":"aebb65c4c0d3ac1bf535bb11209d59fd","kubernetes.io/config.seen":"2024-09-23T10:43:21.706076395Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T10:43:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0923 10:44:38.655797    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:38.655797    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:38.655797    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:38.655856    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:38.659772    6872 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0923 10:44:38.659772    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:38.659772    6872 round_trippers.go:580]     Audit-Id: 72bea7b7-abea-40e5-8d53-19968bfbe32c
	I0923 10:44:38.659772    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:38.659772    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:38.659772    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:38.659772    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:38.659772    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:38 GMT
	I0923 10:44:38.659772    6872 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","resourceVersion":"393","creationTimestamp":"2024-09-23T10:43:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-734700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f69bf2f8ed9442c9c01edbe27466c5398c68b986","minikube.k8s.io/name":"functional-734700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T10_43_22_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T10:43:18Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 10:44:39.150576    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-734700
	I0923 10:44:39.150576    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:39.150576    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:39.150576    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:39.156567    6872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 10:44:39.156567    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:39.156567    6872 round_trippers.go:580]     Audit-Id: 01bf35e7-12ff-492e-bc2a-4340e926ee50
	I0923 10:44:39.156567    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:39.156567    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:39.156567    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:39.156567    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:39.156567    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:39 GMT
	I0923 10:44:39.156567    6872 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-734700","namespace":"kube-system","uid":"552935b0-7b97-44f1-9d44-3c98f30ff23e","resourceVersion":"443","creationTimestamp":"2024-09-23T10:43:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"aebb65c4c0d3ac1bf535bb11209d59fd","kubernetes.io/config.mirror":"aebb65c4c0d3ac1bf535bb11209d59fd","kubernetes.io/config.seen":"2024-09-23T10:43:21.706076395Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T10:43:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0923 10:44:39.157500    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:39.157532    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:39.157567    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:39.157567    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:39.164489    6872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 10:44:39.164489    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:39.164489    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:39 GMT
	I0923 10:44:39.164489    6872 round_trippers.go:580]     Audit-Id: e7d40862-5343-45b9-bcfe-3fdc844ee1a3
	I0923 10:44:39.164489    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:39.164489    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:39.164489    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:39.164489    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:39.164489    6872 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","resourceVersion":"393","creationTimestamp":"2024-09-23T10:43:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-734700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f69bf2f8ed9442c9c01edbe27466c5398c68b986","minikube.k8s.io/name":"functional-734700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T10_43_22_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T10:43:18Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 10:44:39.649919    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-734700
	I0923 10:44:39.649919    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:39.649919    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:39.649919    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:39.655848    6872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 10:44:39.655904    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:39.655930    6872 round_trippers.go:580]     Audit-Id: edbb8eed-a577-4b4b-b5e8-013356d125f0
	I0923 10:44:39.655930    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:39.655930    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:39.655930    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:39.655958    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:39.655958    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:39 GMT
	I0923 10:44:39.657129    6872 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-734700","namespace":"kube-system","uid":"552935b0-7b97-44f1-9d44-3c98f30ff23e","resourceVersion":"443","creationTimestamp":"2024-09-23T10:43:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"aebb65c4c0d3ac1bf535bb11209d59fd","kubernetes.io/config.mirror":"aebb65c4c0d3ac1bf535bb11209d59fd","kubernetes.io/config.seen":"2024-09-23T10:43:21.706076395Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T10:43:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0923 10:44:39.657795    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:39.657859    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:39.657859    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:39.657859    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:39.663651    6872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 10:44:39.663651    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:39.663651    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:39.663651    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:39.663651    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:39 GMT
	I0923 10:44:39.663651    6872 round_trippers.go:580]     Audit-Id: 29a8f61a-b424-4d25-b8b9-d7e9f469d715
	I0923 10:44:39.663651    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:39.663651    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:39.664995    6872 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","resourceVersion":"393","creationTimestamp":"2024-09-23T10:43:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-734700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f69bf2f8ed9442c9c01edbe27466c5398c68b986","minikube.k8s.io/name":"functional-734700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T10_43_22_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T10:43:18Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 10:44:40.150632    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-734700
	I0923 10:44:40.150632    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:40.150632    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:40.150632    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:40.156280    6872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 10:44:40.156280    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:40.156280    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:40.156280    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:40.156280    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:40.156280    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:40.156280    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:40 GMT
	I0923 10:44:40.156280    6872 round_trippers.go:580]     Audit-Id: 67599b84-0f42-4fd7-bd6a-70519e79f456
	I0923 10:44:40.156280    6872 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-734700","namespace":"kube-system","uid":"552935b0-7b97-44f1-9d44-3c98f30ff23e","resourceVersion":"443","creationTimestamp":"2024-09-23T10:43:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"aebb65c4c0d3ac1bf535bb11209d59fd","kubernetes.io/config.mirror":"aebb65c4c0d3ac1bf535bb11209d59fd","kubernetes.io/config.seen":"2024-09-23T10:43:21.706076395Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T10:43:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0923 10:44:40.157100    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:40.157176    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:40.157176    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:40.157176    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:40.163455    6872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 10:44:40.163455    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:40.163455    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:40.163455    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:40.163455    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:40.163455    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:40 GMT
	I0923 10:44:40.163455    6872 round_trippers.go:580]     Audit-Id: 7a42cba2-ae41-4dc5-ba18-49377a4ccb02
	I0923 10:44:40.163455    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:40.163455    6872 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","resourceVersion":"393","creationTimestamp":"2024-09-23T10:43:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-734700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f69bf2f8ed9442c9c01edbe27466c5398c68b986","minikube.k8s.io/name":"functional-734700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T10_43_22_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-09-23T10:43:18Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 10:44:40.650365    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-734700
	I0923 10:44:40.650365    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:40.650365    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:40.650365    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:40.655992    6872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 10:44:40.656022    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:40.656022    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:40.656022    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:40.656022    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:40.656022    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:40.656022    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:40 GMT
	I0923 10:44:40.656022    6872 round_trippers.go:580]     Audit-Id: 50fc7979-3bed-47ee-90b0-503b57bdaffb
	I0923 10:44:40.656022    6872 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-734700","namespace":"kube-system","uid":"552935b0-7b97-44f1-9d44-3c98f30ff23e","resourceVersion":"443","creationTimestamp":"2024-09-23T10:43:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"aebb65c4c0d3ac1bf535bb11209d59fd","kubernetes.io/config.mirror":"aebb65c4c0d3ac1bf535bb11209d59fd","kubernetes.io/config.seen":"2024-09-23T10:43:21.706076395Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T10:43:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0923 10:44:40.656613    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:40.656613    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:40.656613    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:40.656613    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:40.662825    6872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 10:44:40.662851    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:40.662851    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:40 GMT
	I0923 10:44:40.662851    6872 round_trippers.go:580]     Audit-Id: e6a9a2f0-6036-4352-ae32-24c021e6a332
	I0923 10:44:40.662851    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:40.662851    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:40.662851    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:40.662851    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:40.662851    6872 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","resourceVersion":"393","creationTimestamp":"2024-09-23T10:43:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-734700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f69bf2f8ed9442c9c01edbe27466c5398c68b986","minikube.k8s.io/name":"functional-734700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T10_43_22_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T10:43:18Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 10:44:40.663398    6872 pod_ready.go:103] pod "kube-scheduler-functional-734700" in "kube-system" namespace has status "Ready":"False"
	I0923 10:44:41.150649    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-734700
	I0923 10:44:41.150649    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:41.150649    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:41.150649    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:41.157171    6872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 10:44:41.157171    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:41.157171    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:41 GMT
	I0923 10:44:41.157171    6872 round_trippers.go:580]     Audit-Id: 5b9bd13e-e145-4611-8502-298017370757
	I0923 10:44:41.157171    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:41.157171    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:41.157171    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:41.157171    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:41.157171    6872 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-734700","namespace":"kube-system","uid":"552935b0-7b97-44f1-9d44-3c98f30ff23e","resourceVersion":"443","creationTimestamp":"2024-09-23T10:43:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"aebb65c4c0d3ac1bf535bb11209d59fd","kubernetes.io/config.mirror":"aebb65c4c0d3ac1bf535bb11209d59fd","kubernetes.io/config.seen":"2024-09-23T10:43:21.706076395Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T10:43:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5441 chars]
	I0923 10:44:41.157932    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:41.157932    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:41.157932    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:41.157932    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:41.164589    6872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 10:44:41.164589    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:41.164589    6872 round_trippers.go:580]     Audit-Id: 8e6df2ba-8262-410f-b57d-1079b4ffbf0c
	I0923 10:44:41.164589    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:41.164589    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:41.164589    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:41.164589    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:41.164589    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:41 GMT
	I0923 10:44:41.164809    6872 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","resourceVersion":"393","creationTimestamp":"2024-09-23T10:43:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-734700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f69bf2f8ed9442c9c01edbe27466c5398c68b986","minikube.k8s.io/name":"functional-734700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T10_43_22_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T10:43:18Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 10:44:41.650396    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-734700
	I0923 10:44:41.650396    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:41.650396    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:41.650396    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:41.656699    6872 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0923 10:44:41.656699    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:41.656767    6872 round_trippers.go:580]     Audit-Id: b327d003-7bbf-4847-8f7d-32396aa8b62a
	I0923 10:44:41.656791    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:41.656791    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:41.656814    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:41.656814    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:41.656814    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:41 GMT
	I0923 10:44:41.657210    6872 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-734700","namespace":"kube-system","uid":"552935b0-7b97-44f1-9d44-3c98f30ff23e","resourceVersion":"542","creationTimestamp":"2024-09-23T10:43:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"aebb65c4c0d3ac1bf535bb11209d59fd","kubernetes.io/config.mirror":"aebb65c4c0d3ac1bf535bb11209d59fd","kubernetes.io/config.seen":"2024-09-23T10:43:21.706076395Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T10:43:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5197 chars]
	I0923 10:44:41.657809    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes/functional-734700
	I0923 10:44:41.657809    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:41.657809    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:41.657809    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:41.663127    6872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 10:44:41.663162    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:41.663162    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:41 GMT
	I0923 10:44:41.663223    6872 round_trippers.go:580]     Audit-Id: 74ec5fb8-33fa-45fb-9f86-90b736aeb24c
	I0923 10:44:41.663223    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:41.663223    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:41.663223    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:41.663223    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:41.663411    6872 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","resourceVersion":"393","creationTimestamp":"2024-09-23T10:43:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-734700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f69bf2f8ed9442c9c01edbe27466c5398c68b986","minikube.k8s.io/name":"functional-734700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T10_43_22_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-23T10:43:18Z","fieldsType":"FieldsV1", [truncated 4854 chars]
	I0923 10:44:41.663847    6872 pod_ready.go:93] pod "kube-scheduler-functional-734700" in "kube-system" namespace has status "Ready":"True"
	I0923 10:44:41.663847    6872 pod_ready.go:82] duration metric: took 3.014063s for pod "kube-scheduler-functional-734700" in "kube-system" namespace to be "Ready" ...
	I0923 10:44:41.663847    6872 pod_ready.go:39] duration metric: took 10.2591954s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:44:41.663965    6872 api_server.go:52] waiting for apiserver process to appear ...
	I0923 10:44:41.676551    6872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:44:41.702123    6872 command_runner.go:130] > 6012
	I0923 10:44:41.702123    6872 api_server.go:72] duration metric: took 22.0364579s to wait for apiserver process to appear ...
	I0923 10:44:41.702123    6872 api_server.go:88] waiting for apiserver healthz status ...
	I0923 10:44:41.702123    6872 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57730/healthz ...
	I0923 10:44:41.712896    6872 api_server.go:279] https://127.0.0.1:57730/healthz returned 200:
	ok
	I0923 10:44:41.712896    6872 round_trippers.go:463] GET https://127.0.0.1:57730/version
	I0923 10:44:41.712896    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:41.712896    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:41.712896    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:41.717221    6872 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0923 10:44:41.717221    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:41.717274    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:41.717274    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:41.717274    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:41.717274    6872 round_trippers.go:580]     Content-Length: 263
	I0923 10:44:41.717274    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:41 GMT
	I0923 10:44:41.717274    6872 round_trippers.go:580]     Audit-Id: 25f0999c-63d2-48a1-b76c-7c987291f15c
	I0923 10:44:41.717314    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:41.717340    6872 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0923 10:44:41.717340    6872 api_server.go:141] control plane version: v1.31.1
	I0923 10:44:41.717340    6872 api_server.go:131] duration metric: took 15.2157ms to wait for apiserver health ...
	I0923 10:44:41.717340    6872 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 10:44:41.717340    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/namespaces/kube-system/pods
	I0923 10:44:41.717340    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:41.717340    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:41.717340    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:41.723108    6872 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0923 10:44:41.723108    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:41.723108    6872 round_trippers.go:580]     Audit-Id: 40b0e3b9-c8de-42d6-8cfc-21b26e3cea30
	I0923 10:44:41.723108    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:41.723108    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:41.723108    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:41.723108    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:41.723108    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:41 GMT
	I0923 10:44:41.724475    6872 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"542"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-mx6qw","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"93c1f293-a585-415f-97d9-77def36eec58","resourceVersion":"454","creationTimestamp":"2024-09-23T10:43:26Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"28e26013-7ea2-4f52-b2c9-aaeb7687566e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T10:43:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28e26013-7ea2-4f52-b2c9-aaeb7687566e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53264 chars]
	I0923 10:44:41.727568    6872 system_pods.go:59] 7 kube-system pods found
	I0923 10:44:41.727648    6872 system_pods.go:61] "coredns-7c65d6cfc9-mx6qw" [93c1f293-a585-415f-97d9-77def36eec58] Running
	I0923 10:44:41.727648    6872 system_pods.go:61] "etcd-functional-734700" [0969955e-97ba-4756-b168-a3321b1eaf73] Running
	I0923 10:44:41.727648    6872 system_pods.go:61] "kube-apiserver-functional-734700" [8b5cdbe4-c503-49e4-8f42-a1296d3edbfc] Running
	I0923 10:44:41.727648    6872 system_pods.go:61] "kube-controller-manager-functional-734700" [b3db7762-6768-4139-8da3-5e6560e2778e] Running
	I0923 10:44:41.727648    6872 system_pods.go:61] "kube-proxy-nd2v2" [f96eec79-84df-47d8-a5d9-1fbfffa680a7] Running
	I0923 10:44:41.727648    6872 system_pods.go:61] "kube-scheduler-functional-734700" [552935b0-7b97-44f1-9d44-3c98f30ff23e] Running
	I0923 10:44:41.727648    6872 system_pods.go:61] "storage-provisioner" [7c1fbec1-bfde-4344-83de-2f498ff7c38a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0923 10:44:41.727648    6872 system_pods.go:74] duration metric: took 10.3079ms to wait for pod list to return data ...
	I0923 10:44:41.727648    6872 default_sa.go:34] waiting for default service account to be created ...
	I0923 10:44:41.727795    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/namespaces/default/serviceaccounts
	I0923 10:44:41.727869    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:41.727869    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:41.727869    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:41.735690    6872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 10:44:41.735690    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:41.735690    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:41.735690    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:41.735690    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:41.735690    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:41.735792    6872 round_trippers.go:580]     Content-Length: 261
	I0923 10:44:41.735792    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:41 GMT
	I0923 10:44:41.735792    6872 round_trippers.go:580]     Audit-Id: f8a95a33-88fe-4dea-b606-65fe1436198c
	I0923 10:44:41.735820    6872 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"542"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"3046a22e-40a9-40ab-888f-9cb8ef5960db","resourceVersion":"310","creationTimestamp":"2024-09-23T10:43:26Z"}}]}
	I0923 10:44:41.735864    6872 default_sa.go:45] found service account: "default"
	I0923 10:44:41.735864    6872 default_sa.go:55] duration metric: took 8.1379ms for default service account to be created ...
	I0923 10:44:41.735864    6872 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 10:44:41.735864    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/namespaces/kube-system/pods
	I0923 10:44:41.735864    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:41.735864    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:41.735864    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:41.743261    6872 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0923 10:44:41.743370    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:41.743388    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:41 GMT
	I0923 10:44:41.743388    6872 round_trippers.go:580]     Audit-Id: ea30063b-c476-40b0-842d-19d7544bbc4b
	I0923 10:44:41.743418    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:41.743418    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:41.743418    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:41.743418    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:41.744144    6872 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"542"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-mx6qw","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"93c1f293-a585-415f-97d9-77def36eec58","resourceVersion":"454","creationTimestamp":"2024-09-23T10:43:26Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"28e26013-7ea2-4f52-b2c9-aaeb7687566e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-23T10:43:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"28e26013-7ea2-4f52-b2c9-aaeb7687566e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53264 chars]
	I0923 10:44:41.746161    6872 system_pods.go:86] 7 kube-system pods found
	I0923 10:44:41.746161    6872 system_pods.go:89] "coredns-7c65d6cfc9-mx6qw" [93c1f293-a585-415f-97d9-77def36eec58] Running
	I0923 10:44:41.746161    6872 system_pods.go:89] "etcd-functional-734700" [0969955e-97ba-4756-b168-a3321b1eaf73] Running
	I0923 10:44:41.746161    6872 system_pods.go:89] "kube-apiserver-functional-734700" [8b5cdbe4-c503-49e4-8f42-a1296d3edbfc] Running
	I0923 10:44:41.746161    6872 system_pods.go:89] "kube-controller-manager-functional-734700" [b3db7762-6768-4139-8da3-5e6560e2778e] Running
	I0923 10:44:41.746161    6872 system_pods.go:89] "kube-proxy-nd2v2" [f96eec79-84df-47d8-a5d9-1fbfffa680a7] Running
	I0923 10:44:41.746161    6872 system_pods.go:89] "kube-scheduler-functional-734700" [552935b0-7b97-44f1-9d44-3c98f30ff23e] Running
	I0923 10:44:41.746161    6872 system_pods.go:89] "storage-provisioner" [7c1fbec1-bfde-4344-83de-2f498ff7c38a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0923 10:44:41.746161    6872 system_pods.go:126] duration metric: took 10.2972ms to wait for k8s-apps to be running ...
	I0923 10:44:41.746161    6872 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 10:44:41.758625    6872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:44:41.784304    6872 system_svc.go:56] duration metric: took 38.1414ms WaitForService to wait for kubelet
	I0923 10:44:41.784304    6872 kubeadm.go:582] duration metric: took 22.1186351s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:44:41.784304    6872 node_conditions.go:102] verifying NodePressure condition ...
	I0923 10:44:41.784304    6872 round_trippers.go:463] GET https://127.0.0.1:57730/api/v1/nodes
	I0923 10:44:41.784304    6872 round_trippers.go:469] Request Headers:
	I0923 10:44:41.784304    6872 round_trippers.go:473]     Accept: application/json, */*
	I0923 10:44:41.784304    6872 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0923 10:44:41.793814    6872 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0923 10:44:41.793814    6872 round_trippers.go:577] Response Headers:
	I0923 10:44:41.793814    6872 round_trippers.go:580]     Date: Mon, 23 Sep 2024 10:44:41 GMT
	I0923 10:44:41.793814    6872 round_trippers.go:580]     Audit-Id: 99846012-9f87-4a3e-9b30-6a51d32d89f4
	I0923 10:44:41.793814    6872 round_trippers.go:580]     Cache-Control: no-cache, private
	I0923 10:44:41.793814    6872 round_trippers.go:580]     Content-Type: application/json
	I0923 10:44:41.793814    6872 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 42e00072-ddee-4e01-bb47-d9f3b96eab3a
	I0923 10:44:41.793814    6872 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: d0487bea-4daf-4484-bc68-09d11e3abd50
	I0923 10:44:41.793814    6872 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"542"},"items":[{"metadata":{"name":"functional-734700","uid":"f367df96-29c9-48ff-909c-110572f901d8","resourceVersion":"393","creationTimestamp":"2024-09-23T10:43:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-734700","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f69bf2f8ed9442c9c01edbe27466c5398c68b986","minikube.k8s.io/name":"functional-734700","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_23T10_43_22_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4907 chars]
	I0923 10:44:41.794646    6872 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I0923 10:44:41.794646    6872 node_conditions.go:123] node cpu capacity is 16
	I0923 10:44:41.794646    6872 node_conditions.go:105] duration metric: took 10.3414ms to run NodePressure ...
	I0923 10:44:41.794646    6872 start.go:241] waiting for startup goroutines ...
	I0923 10:44:41.794646    6872 start.go:246] waiting for cluster config update ...
	I0923 10:44:41.794646    6872 start.go:255] writing updated cluster config ...
	I0923 10:44:41.807199    6872 ssh_runner.go:195] Run: rm -f paused
	I0923 10:44:41.936322    6872 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 10:44:41.941136    6872 out.go:177] * Done! kubectl is now configured to use "functional-734700" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 23 10:44:16 functional-734700 systemd[1]: Started Docker Application Container Engine.
	Sep 23 10:44:16 functional-734700 cri-dockerd[1658]: time="2024-09-23T10:44:16Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7c65d6cfc9-mx6qw_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"b347af686dcf1d50bfc80567a3432f173a24cf54fe364731743a9e85e5dbf9e8\""
	Sep 23 10:44:16 functional-734700 systemd[1]: Stopping CRI Interface for Docker Application Container Engine...
	Sep 23 10:44:16 functional-734700 systemd[1]: cri-docker.service: Deactivated successfully.
	Sep 23 10:44:16 functional-734700 systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	Sep 23 10:44:17 functional-734700 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Sep 23 10:44:17 functional-734700 cri-dockerd[4938]: time="2024-09-23T10:44:17Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Sep 23 10:44:17 functional-734700 cri-dockerd[4938]: time="2024-09-23T10:44:17Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Sep 23 10:44:17 functional-734700 cri-dockerd[4938]: time="2024-09-23T10:44:17Z" level=info msg="Start docker client with request timeout 0s"
	Sep 23 10:44:17 functional-734700 cri-dockerd[4938]: time="2024-09-23T10:44:17Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Sep 23 10:44:17 functional-734700 cri-dockerd[4938]: time="2024-09-23T10:44:17Z" level=info msg="Loaded network plugin cni"
	Sep 23 10:44:17 functional-734700 cri-dockerd[4938]: time="2024-09-23T10:44:17Z" level=info msg="Docker cri networking managed by network plugin cni"
	Sep 23 10:44:17 functional-734700 cri-dockerd[4938]: time="2024-09-23T10:44:17Z" level=info msg="Setting cgroupDriver cgroupfs"
	Sep 23 10:44:17 functional-734700 cri-dockerd[4938]: time="2024-09-23T10:44:17Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Sep 23 10:44:17 functional-734700 cri-dockerd[4938]: time="2024-09-23T10:44:17Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Sep 23 10:44:17 functional-734700 cri-dockerd[4938]: time="2024-09-23T10:44:17Z" level=info msg="Start cri-dockerd grpc backend"
	Sep 23 10:44:17 functional-734700 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Sep 23 10:44:25 functional-734700 cri-dockerd[4938]: time="2024-09-23T10:44:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5527c3c6b6fdab960055cb3f72568851c358d5dc3a4779cd45cd8cf375a61706/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Sep 23 10:44:25 functional-734700 cri-dockerd[4938]: time="2024-09-23T10:44:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/25a9c2c2d18d576fa2e68b72d311211ed3c622ea73de78248e49a5c82c4201c4/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Sep 23 10:44:25 functional-734700 cri-dockerd[4938]: time="2024-09-23T10:44:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/069f4df3def3e37bd4c5198c09fe6c32b05938e3323f75b423997e4ba840f4dd/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Sep 23 10:44:25 functional-734700 cri-dockerd[4938]: time="2024-09-23T10:44:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/567cdad08b1c019c719181f5db2b1e4848b21dfcacd052b87fd96ce4f6d9ff6a/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Sep 23 10:44:25 functional-734700 cri-dockerd[4938]: time="2024-09-23T10:44:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d0eb56995fb6616e5c110fbb61bd24617200477d0d2d18c3dadb8cd7e79b6c5e/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Sep 23 10:44:25 functional-734700 cri-dockerd[4938]: time="2024-09-23T10:44:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f93a686519e359b2bd3a7afd05d204d3f1e99c7329ddefb3be53ea09ce1a492a/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Sep 23 10:44:25 functional-734700 cri-dockerd[4938]: time="2024-09-23T10:44:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0123be89ee22abede9a302d2308339a99f87b251ea5d6c9112fa3d9a588fe2ac/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Sep 23 10:44:26 functional-734700 dockerd[4649]: time="2024-09-23T10:44:26.709165986Z" level=info msg="ignoring event" container=7d4530f2baf46d9712bb1398cf713961eb3ff6a17961dd05b51f586f525ae9e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6a0c0f06854d7       6e38f40d628db       16 seconds ago       Running             storage-provisioner       2                   25a9c2c2d18d5       storage-provisioner
	f2fe8eaf6bf40       c69fa2e9cbf5f       34 seconds ago       Running             coredns                   1                   0123be89ee22a       coredns-7c65d6cfc9-mx6qw
	084e0a14cf50b       175ffd71cce3d       35 seconds ago       Running             kube-controller-manager   1                   f93a686519e35       kube-controller-manager-functional-734700
	582adaf66955a       6bab7719df100       35 seconds ago       Running             kube-apiserver            1                   567cdad08b1c0       kube-apiserver-functional-734700
	ddd7ab19a831f       9aa1fad941575       35 seconds ago       Running             kube-scheduler            1                   d0eb56995fb66       kube-scheduler-functional-734700
	08bb40f60e522       60c005f310ff3       35 seconds ago       Running             kube-proxy                1                   069f4df3def3e       kube-proxy-nd2v2
	7d4530f2baf46       6e38f40d628db       35 seconds ago       Exited              storage-provisioner       1                   25a9c2c2d18d5       storage-provisioner
	45671b006dab9       2e96e5913fc06       35 seconds ago       Running             etcd                      1                   5527c3c6b6fda       etcd-functional-734700
	5085b04d22755       c69fa2e9cbf5f       About a minute ago   Exited              coredns                   0                   b347af686dcf1       coredns-7c65d6cfc9-mx6qw
	64666ba437713       60c005f310ff3       About a minute ago   Exited              kube-proxy                0                   f6938c7acc90f       kube-proxy-nd2v2
	af4fe9a742664       6bab7719df100       About a minute ago   Exited              kube-apiserver            0                   4e7bf32981538       kube-apiserver-functional-734700
	bbbbba61d9b7e       9aa1fad941575       About a minute ago   Exited              kube-scheduler            0                   0d4b148db541b       kube-scheduler-functional-734700
	8e57a51ba2d85       175ffd71cce3d       About a minute ago   Exited              kube-controller-manager   0                   acc2f87715525       kube-controller-manager-functional-734700
	e7fbc452add48       2e96e5913fc06       About a minute ago   Exited              etcd                      0                   6fa1ee1a6f01a       etcd-functional-734700
	
	
	==> coredns [5085b04d2275] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[729165385]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (23-Sep-2024 10:43:30.811) (total time: 21035ms):
	Trace[729165385]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21034ms (10:43:51.842)
	Trace[729165385]: [21.035367073s] [21.035367073s] END
	[INFO] plugin/kubernetes: Trace[751750754]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (23-Sep-2024 10:43:30.811) (total time: 21035ms):
	Trace[751750754]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21034ms (10:43:51.842)
	Trace[751750754]: [21.035137436s] [21.035137436s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: Trace[1153749801]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (23-Sep-2024 10:43:30.811) (total time: 21036ms):
	Trace[1153749801]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused 21034ms (10:43:51.842)
	Trace[1153749801]: [21.03609529s] [21.03609529s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f2fe8eaf6bf4] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44869 - 16843 "HINFO IN 4307839810358541069.6199623818884807864. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.099262525s
	
	
	==> describe nodes <==
	Name:               functional-734700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-734700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=functional-734700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T10_43_22_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:43:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-734700
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 10:44:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 10:44:55 +0000   Mon, 23 Sep 2024 10:43:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 10:44:55 +0000   Mon, 23 Sep 2024 10:43:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 10:44:55 +0000   Mon, 23 Sep 2024 10:43:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 10:44:55 +0000   Mon, 23 Sep 2024 10:43:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-734700
	Capacity:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868688Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868688Ki
	  pods:               110
	System Info:
	  Machine ID:                 327bd72838d0438e999f273db3f5949d
	  System UUID:                327bd72838d0438e999f273db3f5949d
	  Boot ID:                    d450b61c-b7f5-4a84-8b7a-3c24688adc16
	  Kernel Version:             5.15.153.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-mx6qw                     100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     94s
	  kube-system                 etcd-functional-734700                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         98s
	  kube-system                 kube-apiserver-functional-734700             250m (1%)     0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-controller-manager-functional-734700    200m (1%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-proxy-nd2v2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-scheduler-functional-734700             100m (0%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (4%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                             Age   From             Message
	  ----     ------                             ----  ----             -------
	  Normal   Starting                           89s   kube-proxy       
	  Normal   Starting                           28s   kube-proxy       
	  Warning  PossibleMemoryBackedVolumesOnDisk  99s   kubelet          The tmpfs noswap option is not supported. Memory-backed volumes (e.g. secrets, emptyDirs, etc.) might be swapped to disk and should no longer be considered secure.
	  Normal   Starting                           99s   kubelet          Starting kubelet.
	  Warning  CgroupV1                           99s   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced            98s   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory            98s   kubelet          Node functional-734700 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure              98s   kubelet          Node functional-734700 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID               98s   kubelet          Node functional-734700 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode                     95s   node-controller  Node functional-734700 event: Registered Node functional-734700 in Controller
	  Normal   NodeNotReady                       47s   kubelet          Node functional-734700 status is now: NodeNotReady
	  Normal   RegisteredNode                     26s   node-controller  Node functional-734700 event: Registered Node functional-734700 in Controller
	
	
	==> dmesg <==
	[  +0.001558] FS-Cache: N-key=[10] '34323934393337363439'
	[  +0.014149] WSL (1) WARNING: /usr/share/zoneinfo/Etc/UTC not found. Is the tzdata package installed?
	[  +0.525136] misc dxg: dxgk: dxgglobal_acquire_channel_lock: Failed to acquire global channel lock
	[  +1.748684] WSL (2) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.002212] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.002792] WSL (1) ERROR: ConfigMountFsTab:2589: Processing fstab with mount -a failed.
	[  +0.005280] WSL (1) ERROR: ConfigApplyWindowsLibPath:2537: open /etc/ld.so.conf.d/ld.wsl.conf
	[  +0.000003]  failed 2
	[  +0.006699] WSL (3) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.001956] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.004810] WSL (4) ERROR: UtilCreateProcessAndWait:665: /bin/mount failed with 2
	[  +0.002117] WSL (1) ERROR: UtilCreateProcessAndWait:687: /bin/mount failed with status 0xff00
	
	[  +0.069743] WSL (1) WARNING: /usr/share/zoneinfo/Etc/UTC not found. Is the tzdata package installed?
	[  +0.111070] misc dxg: dxgk: dxgglobal_acquire_channel_lock: Failed to acquire global channel lock
	[  +0.964638] netlink: 'init': attribute type 4 has an invalid length.
	[Sep23 10:23] tmpfs: Unknown parameter 'noswap'
	[ +10.369228] tmpfs: Unknown parameter 'noswap'
	[Sep23 10:41] tmpfs: Unknown parameter 'noswap'
	[  +8.961666] tmpfs: Unknown parameter 'noswap'
	[ +14.355859] tmpfs: Unknown parameter 'noswap'
	[Sep23 10:43] tmpfs: Unknown parameter 'noswap'
	[  +9.289539] tmpfs: Unknown parameter 'noswap'
	
	
	==> etcd [45671b006dab] <==
	{"level":"info","ts":"2024-09-23T10:44:27.297666Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-23T10:44:28.310485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-23T10:44:28.310598Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-23T10:44:28.310659Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-23T10:44:28.310678Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-09-23T10:44:28.310685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-23T10:44:28.310713Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-09-23T10:44:28.310723Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-23T10:44:28.314304Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-734700 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T10:44:28.314384Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T10:44:28.314540Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T10:44:28.316797Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T10:44:28.316952Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T10:44:28.318935Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T10:44:28.319557Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T10:44:28.320496Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T10:44:28.320694Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-23T10:44:31.608154Z","caller":"traceutil/trace.go:171","msg":"trace[96404391] transaction","detail":"{read_only:false; number_of_response:1; response_revision:433; }","duration":"105.553509ms","start":"2024-09-23T10:44:31.502579Z","end":"2024-09-23T10:44:31.608133Z","steps":["trace[96404391] 'process raft request'  (duration: 104.878703ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:44:31.894556Z","caller":"traceutil/trace.go:171","msg":"trace[1330109898] transaction","detail":"{read_only:false; response_revision:434; number_of_response:1; }","duration":"100.220574ms","start":"2024-09-23T10:44:31.794306Z","end":"2024-09-23T10:44:31.894526Z","steps":["trace[1330109898] 'process raft request'  (duration: 100.00014ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:44:31.895093Z","caller":"traceutil/trace.go:171","msg":"trace[919987634] linearizableReadLoop","detail":"{readStateIndex:452; appliedIndex:452; }","duration":"100.566328ms","start":"2024-09-23T10:44:31.794514Z","end":"2024-09-23T10:44:31.895080Z","steps":["trace[919987634] 'read index received'  (duration: 100.562227ms)","trace[919987634] 'applied index is now lower than readState.Index'  (duration: 3.301µs)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T10:44:31.895222Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.654642ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/\" range_end:\"/registry/masterleases0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T10:44:31.895286Z","caller":"traceutil/trace.go:171","msg":"trace[1273634372] range","detail":"{range_begin:/registry/masterleases/; range_end:/registry/masterleases0; response_count:0; response_revision:434; }","duration":"100.76516ms","start":"2024-09-23T10:44:31.794506Z","end":"2024-09-23T10:44:31.895272Z","steps":["trace[1273634372] 'agreement among raft nodes before linearized reading'  (duration: 100.632439ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:44:31.905557Z","caller":"traceutil/trace.go:171","msg":"trace[1345023807] transaction","detail":"{read_only:false; response_revision:435; number_of_response:1; }","duration":"104.692774ms","start":"2024-09-23T10:44:31.800846Z","end":"2024-09-23T10:44:31.905539Z","steps":["trace[1345023807] 'process raft request'  (duration: 104.252205ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T10:44:31.905550Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.108239ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/functional-734700\" ","response":"range_response_count:1 size:4443"}
	{"level":"info","ts":"2024-09-23T10:44:31.905858Z","caller":"traceutil/trace.go:171","msg":"trace[915168700] range","detail":"{range_begin:/registry/minions/functional-734700; range_end:; response_count:1; response_revision:435; }","duration":"105.459594ms","start":"2024-09-23T10:44:31.800386Z","end":"2024-09-23T10:44:31.905845Z","steps":["trace[915168700] 'agreement among raft nodes before linearized reading'  (duration: 105.007923ms)"],"step_count":1}
	
	
	==> etcd [e7fbc452add4] <==
	{"level":"info","ts":"2024-09-23T10:43:15.126354Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:43:15.126425Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T10:43:15.126619Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T10:43:15.127154Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T10:43:15.127319Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T10:43:15.127444Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T10:43:15.128519Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T10:43:15.128569Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T10:43:15.131710Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:43:15.131892Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:43:15.132026Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T10:43:15.132164Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-23T10:43:28.909626Z","caller":"traceutil/trace.go:171","msg":"trace[877418237] transaction","detail":"{read_only:false; response_revision:342; number_of_response:1; }","duration":"100.036008ms","start":"2024-09-23T10:43:28.809569Z","end":"2024-09-23T10:43:28.909605Z","steps":["trace[877418237] 'process raft request'  (duration: 99.646946ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:43:29.128851Z","caller":"traceutil/trace.go:171","msg":"trace[2052139401] transaction","detail":"{read_only:false; response_revision:348; number_of_response:1; }","duration":"108.209617ms","start":"2024-09-23T10:43:29.020623Z","end":"2024-09-23T10:43:29.128832Z","steps":["trace[2052139401] 'process raft request'  (duration: 101.196294ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:43:29.330353Z","caller":"traceutil/trace.go:171","msg":"trace[545891064] transaction","detail":"{read_only:false; response_revision:352; number_of_response:1; }","duration":"103.506764ms","start":"2024-09-23T10:43:29.226821Z","end":"2024-09-23T10:43:29.330328Z","steps":["trace[545891064] 'process raft request'  (duration: 82.179851ms)","trace[545891064] 'compare'  (duration: 20.688711ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T10:44:03.842465Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-23T10:44:03.842619Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-734700","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-09-23T10:44:03.842723Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T10:44:03.842814Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T10:44:03.905629Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-23T10:44:03.905701Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-23T10:44:03.905827Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-09-23T10:44:04.097865Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-23T10:44:04.098206Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-23T10:44:04.098367Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-734700","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 10:45:00 up 13:33,  0 users,  load average: 0.90, 1.17, 1.00
	Linux functional-734700 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [582adaf66955] <==
	I0923 10:44:31.345495       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0923 10:44:31.345511       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0923 10:44:31.344681       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0923 10:44:31.344667       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0923 10:44:31.412930       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0923 10:44:31.413070       1 policy_source.go:224] refreshing policies
	I0923 10:44:31.493821       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0923 10:44:31.493871       1 aggregator.go:171] initial CRD sync complete...
	I0923 10:44:31.493883       1 autoregister_controller.go:144] Starting autoregister controller
	I0923 10:44:31.493897       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0923 10:44:31.494409       1 shared_informer.go:320] Caches are synced for configmaps
	I0923 10:44:31.593976       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0923 10:44:31.594176       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0923 10:44:31.594622       1 cache.go:39] Caches are synced for autoregister controller
	I0923 10:44:31.595425       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0923 10:44:31.595523       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0923 10:44:31.596764       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0923 10:44:31.596781       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0923 10:44:31.597266       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0923 10:44:31.696144       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0923 10:44:31.806399       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0923 10:44:31.898440       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0923 10:44:32.409677       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0923 10:44:34.959140       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0923 10:44:35.257636       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [af4fe9a74266] <==
	W0923 10:44:12.909463       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 10:44:12.972840       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 10:44:12.991063       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 10:44:13.003367       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 10:44:13.015995       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 10:44:13.065581       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 10:44:13.089095       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 10:44:13.093865       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 10:44:13.148746       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 10:44:13.214555       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 10:44:13.241481       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 10:44:13.316047       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 10:44:13.354370       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 10:44:13.488195       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 10:44:13.528758       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 10:44:13.534508       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 10:44:13.538362       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 10:44:13.569399       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 10:44:13.612008       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 10:44:13.692555       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 10:44:13.819010       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 10:44:13.913253       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 10:44:13.920589       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 10:44:13.922197       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0923 10:44:13.936730       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [084e0a14cf50] <==
	I0923 10:44:34.953963       1 shared_informer.go:320] Caches are synced for PVC protection
	I0923 10:44:34.954069       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0923 10:44:34.954118       1 shared_informer.go:320] Caches are synced for GC
	I0923 10:44:34.954590       1 shared_informer.go:320] Caches are synced for stateful set
	I0923 10:44:34.954690       1 shared_informer.go:320] Caches are synced for endpoint
	I0923 10:44:34.956073       1 shared_informer.go:320] Caches are synced for taint
	I0923 10:44:34.956377       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0923 10:44:34.956443       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-734700"
	I0923 10:44:34.956477       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0923 10:44:34.962846       1 shared_informer.go:320] Caches are synced for ephemeral
	I0923 10:44:34.966674       1 shared_informer.go:320] Caches are synced for job
	I0923 10:44:34.980775       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0923 10:44:35.011204       1 shared_informer.go:320] Caches are synced for PV protection
	I0923 10:44:35.023466       1 shared_informer.go:320] Caches are synced for attach detach
	I0923 10:44:35.140504       1 shared_informer.go:320] Caches are synced for persistent volume
	I0923 10:44:35.163346       1 shared_informer.go:320] Caches are synced for resource quota
	I0923 10:44:35.172650       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="223.049794ms"
	I0923 10:44:35.172877       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="41.506µs"
	I0923 10:44:35.178026       1 shared_informer.go:320] Caches are synced for resource quota
	I0923 10:44:35.601097       1 shared_informer.go:320] Caches are synced for garbage collector
	I0923 10:44:35.602666       1 shared_informer.go:320] Caches are synced for garbage collector
	I0923 10:44:35.602755       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0923 10:44:36.393429       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="15.828487ms"
	I0923 10:44:36.393599       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="41.306µs"
	I0923 10:44:55.785875       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-734700"
	
	
	==> kube-controller-manager [8e57a51ba2d8] <==
	I0923 10:43:26.002154       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0923 10:43:26.012241       1 shared_informer.go:320] Caches are synced for cronjob
	I0923 10:43:26.086880       1 shared_informer.go:320] Caches are synced for resource quota
	I0923 10:43:26.116494       1 shared_informer.go:320] Caches are synced for resource quota
	I0923 10:43:26.501421       1 shared_informer.go:320] Caches are synced for garbage collector
	I0923 10:43:26.524587       1 shared_informer.go:320] Caches are synced for garbage collector
	I0923 10:43:26.524670       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0923 10:43:26.543673       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-734700"
	I0923 10:43:26.918317       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="982.579941ms"
	I0923 10:43:26.958040       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="39.645945ms"
	I0923 10:43:26.958258       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="82.113µs"
	I0923 10:43:26.958384       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="59.41µs"
	I0923 10:43:26.958557       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="55.109µs"
	I0923 10:43:29.201965       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="297.82006ms"
	I0923 10:43:29.219011       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="16.791687ms"
	I0923 10:43:29.219237       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="60.91µs"
	I0923 10:43:30.829431       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="89.014µs"
	I0923 10:43:31.886950       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="39.906µs"
	I0923 10:43:32.391351       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-734700"
	I0923 10:43:41.290908       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="74.812µs"
	I0923 10:43:42.176500       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="62.81µs"
	I0923 10:43:42.208133       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="108.418µs"
	I0923 10:43:42.212142       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="89.214µs"
	I0923 10:43:56.407498       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="20.340115ms"
	I0923 10:43:56.407733       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="73.212µs"
	
	
	==> kube-proxy [08bb40f60e52] <==
	E0923 10:44:27.194798       1 metrics.go:340] "failed to initialize nfacct client" err="nfacct sub-system not available"
	E0923 10:44:27.294014       1 metrics.go:340] "failed to initialize nfacct client" err="nfacct sub-system not available"
	I0923 10:44:27.396371       1 server_linux.go:66] "Using iptables proxy"
	I0923 10:44:31.794815       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0923 10:44:31.795081       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 10:44:32.021119       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0923 10:44:32.021377       1 server_linux.go:169] "Using iptables Proxier"
	I0923 10:44:32.029073       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	E0923 10:44:32.094191       1 proxier.go:283] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv4"
	E0923 10:44:32.194434       1 proxier.go:283] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv6"
	I0923 10:44:32.194836       1 server.go:483] "Version info" version="v1.31.1"
	I0923 10:44:32.194892       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 10:44:32.198044       1 config.go:105] "Starting endpoint slice config controller"
	I0923 10:44:32.198166       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 10:44:32.198257       1 config.go:199] "Starting service config controller"
	I0923 10:44:32.198270       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 10:44:32.198497       1 config.go:328] "Starting node config controller"
	I0923 10:44:32.198546       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 10:44:32.298361       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 10:44:32.298541       1 shared_informer.go:320] Caches are synced for service config
	I0923 10:44:32.298603       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [64666ba43771] <==
	E0923 10:43:30.500722       1 metrics.go:340] "failed to initialize nfacct client" err="nfacct sub-system not available"
	E0923 10:43:30.521349       1 metrics.go:340] "failed to initialize nfacct client" err="nfacct sub-system not available"
	I0923 10:43:30.616628       1 server_linux.go:66] "Using iptables proxy"
	I0923 10:43:31.051629       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0923 10:43:31.051814       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 10:43:31.100619       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0923 10:43:31.100763       1 server_linux.go:169] "Using iptables Proxier"
	I0923 10:43:31.105074       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	E0923 10:43:31.121585       1 proxier.go:283] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv4"
	E0923 10:43:31.140680       1 proxier.go:283] "Failed to create nfacct runner, nfacct based metrics won't be available" err="nfacct sub-system not available" ipFamily="IPv6"
	I0923 10:43:31.141233       1 server.go:483] "Version info" version="v1.31.1"
	I0923 10:43:31.141399       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 10:43:31.143986       1 config.go:105] "Starting endpoint slice config controller"
	I0923 10:43:31.144133       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 10:43:31.144135       1 config.go:199] "Starting service config controller"
	I0923 10:43:31.144151       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 10:43:31.145584       1 config.go:328] "Starting node config controller"
	I0923 10:43:31.145720       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 10:43:31.244529       1 shared_informer.go:320] Caches are synced for service config
	I0923 10:43:31.244645       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 10:43:31.245968       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [bbbbba61d9b7] <==
	E0923 10:43:19.567738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:43:19.574153       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 10:43:19.574255       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:43:19.602683       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 10:43:19.602785       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0923 10:43:19.622351       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 10:43:19.622457       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 10:43:19.628165       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 10:43:19.628262       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:43:19.688929       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 10:43:19.689043       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:43:19.706449       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 10:43:19.706598       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:43:19.718976       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 10:43:19.719082       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0923 10:43:19.782597       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 10:43:19.782697       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:43:19.816070       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 10:43:19.816174       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:43:19.854887       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 10:43:19.855007       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0923 10:43:22.212966       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 10:44:03.998383       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0923 10:44:03.998661       1 run.go:72] "command failed" err="finished without leader elect"
	I0923 10:44:03.998712       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	
	
	==> kube-scheduler [ddd7ab19a831] <==
	I0923 10:44:29.130798       1 serving.go:386] Generated self-signed cert in-memory
	W0923 10:44:31.400919       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0923 10:44:31.401121       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	W0923 10:44:31.401287       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0923 10:44:31.401472       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0923 10:44:31.610910       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0923 10:44:31.611038       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 10:44:31.699178       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0923 10:44:31.699563       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0923 10:44:31.708474       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 10:44:31.699571       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0923 10:44:31.810419       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 10:44:24 functional-734700 kubelet[2591]: I0923 10:44:24.119175    2591 status_manager.go:851] "Failed to get status for pod" podUID="93c1f293-a585-415f-97d9-77def36eec58" pod="kube-system/coredns-7c65d6cfc9-mx6qw" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mx6qw\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 23 10:44:24 functional-734700 kubelet[2591]: I0923 10:44:24.120095    2591 status_manager.go:851] "Failed to get status for pod" podUID="7c1fbec1-bfde-4344-83de-2f498ff7c38a" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 23 10:44:24 functional-734700 kubelet[2591]: I0923 10:44:24.120431    2591 status_manager.go:851] "Failed to get status for pod" podUID="663bcde97e61adf67c0d9b9636b993c2" pod="kube-system/etcd-functional-734700" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-734700\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 23 10:44:24 functional-734700 kubelet[2591]: I0923 10:44:24.120722    2591 status_manager.go:851] "Failed to get status for pod" podUID="623ec1abd24f14e4a5a9c10bf7ecadf1" pod="kube-system/kube-apiserver-functional-734700" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-734700\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 23 10:44:24 functional-734700 kubelet[2591]: I0923 10:44:24.121609    2591 status_manager.go:851] "Failed to get status for pod" podUID="f96eec79-84df-47d8-a5d9-1fbfffa680a7" pod="kube-system/kube-proxy-nd2v2" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-nd2v2\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 23 10:44:24 functional-734700 kubelet[2591]: I0923 10:44:24.123328    2591 status_manager.go:851] "Failed to get status for pod" podUID="93c1f293-a585-415f-97d9-77def36eec58" pod="kube-system/coredns-7c65d6cfc9-mx6qw" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-mx6qw\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 23 10:44:24 functional-734700 kubelet[2591]: I0923 10:44:24.123921    2591 status_manager.go:851] "Failed to get status for pod" podUID="7c1fbec1-bfde-4344-83de-2f498ff7c38a" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 23 10:44:24 functional-734700 kubelet[2591]: I0923 10:44:24.124601    2591 status_manager.go:851] "Failed to get status for pod" podUID="aebb65c4c0d3ac1bf535bb11209d59fd" pod="kube-system/kube-scheduler-functional-734700" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-734700\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 23 10:44:24 functional-734700 kubelet[2591]: I0923 10:44:24.125295    2591 status_manager.go:851] "Failed to get status for pod" podUID="b124e61e83cbd59237a1d77eba2f0baf" pod="kube-system/kube-controller-manager-functional-734700" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-734700\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 23 10:44:25 functional-734700 kubelet[2591]: I0923 10:44:25.940101    2591 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f93a686519e359b2bd3a7afd05d204d3f1e99c7329ddefb3be53ea09ce1a492a"
	Sep 23 10:44:26 functional-734700 kubelet[2591]: I0923 10:44:26.004543    2591 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0123be89ee22abede9a302d2308339a99f87b251ea5d6c9112fa3d9a588fe2ac"
	Sep 23 10:44:26 functional-734700 kubelet[2591]: I0923 10:44:26.024070    2591 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="069f4df3def3e37bd4c5198c09fe6c32b05938e3323f75b423997e4ba840f4dd"
	Sep 23 10:44:26 functional-734700 kubelet[2591]: I0923 10:44:26.501950    2591 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25a9c2c2d18d576fa2e68b72d311211ed3c622ea73de78248e49a5c82c4201c4"
	Sep 23 10:44:26 functional-734700 kubelet[2591]: E0923 10:44:26.795750    2591 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-734700?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Sep 23 10:44:27 functional-734700 kubelet[2591]: E0923 10:44:27.000825    2591 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{etcd-functional-734700.17f7d99b9d0e3e3c  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:etcd-functional-734700,UID:663bcde97e61adf67c0d9b9636b993c2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Unhealthy,Message:Readiness probe failed: Get \"http://127.0.0.1:2381/readyz\": dial tcp 127.0.0.1:2381: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-734700,},FirstTimestamp:2024-09-23 10:44:04.49798918 +0000 UTC m=+43.041404022,LastTimestamp:2024-09-23 10:44:04.49798918 +0000 UTC m=+43.041404022,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-734700,}"
	Sep 23 10:44:27 functional-734700 kubelet[2591]: I0923 10:44:27.609222    2591 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0eb56995fb6616e5c110fbb61bd24617200477d0d2d18c3dadb8cd7e79b6c5e"
	Sep 23 10:44:27 functional-734700 kubelet[2591]: I0923 10:44:27.709296    2591 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="567cdad08b1c019c719181f5db2b1e4848b21dfcacd052b87fd96ce4f6d9ff6a"
	Sep 23 10:44:27 functional-734700 kubelet[2591]: I0923 10:44:27.710731    2591 status_manager.go:851] "Failed to get status for pod" podUID="7c1fbec1-bfde-4344-83de-2f498ff7c38a" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 23 10:44:27 functional-734700 kubelet[2591]: I0923 10:44:27.711469    2591 status_manager.go:851] "Failed to get status for pod" podUID="aebb65c4c0d3ac1bf535bb11209d59fd" pod="kube-system/kube-scheduler-functional-734700" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-734700\": dial tcp 192.168.49.2:8441: connect: connection refused"
	Sep 23 10:44:28 functional-734700 kubelet[2591]: I0923 10:44:28.839827    2591 scope.go:117] "RemoveContainer" containerID="d87f0579efadf3c71f5122dacf1b389df317c24fb870c6b4eacc232ea26ede4c"
	Sep 23 10:44:28 functional-734700 kubelet[2591]: I0923 10:44:28.840224    2591 scope.go:117] "RemoveContainer" containerID="7d4530f2baf46d9712bb1398cf713961eb3ff6a17961dd05b51f586f525ae9e6"
	Sep 23 10:44:28 functional-734700 kubelet[2591]: E0923 10:44:28.840544    2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(7c1fbec1-bfde-4344-83de-2f498ff7c38a)\"" pod="kube-system/storage-provisioner" podUID="7c1fbec1-bfde-4344-83de-2f498ff7c38a"
	Sep 23 10:44:30 functional-734700 kubelet[2591]: I0923 10:44:30.297272    2591 scope.go:117] "RemoveContainer" containerID="7d4530f2baf46d9712bb1398cf713961eb3ff6a17961dd05b51f586f525ae9e6"
	Sep 23 10:44:30 functional-734700 kubelet[2591]: E0923 10:44:30.297538    2591 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(7c1fbec1-bfde-4344-83de-2f498ff7c38a)\"" pod="kube-system/storage-provisioner" podUID="7c1fbec1-bfde-4344-83de-2f498ff7c38a"
	Sep 23 10:44:44 functional-734700 kubelet[2591]: I0923 10:44:44.797880    2591 scope.go:117] "RemoveContainer" containerID="7d4530f2baf46d9712bb1398cf713961eb3ff6a17961dd05b51f586f525ae9e6"
	
	
	==> storage-provisioner [6a0c0f06854d] <==
	I0923 10:44:45.084797       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 10:44:45.103144       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 10:44:45.103367       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [7d4530f2baf4] <==
	I0923 10:44:26.516360       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0923 10:44:26.603548       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-734700 -n functional-734700
helpers_test.go:261: (dbg) Run:  kubectl --context functional-734700 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (5.32s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (410.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-656000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0
E0923 11:55:11.826551    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:55:15.059998    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:55:15.169944    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p old-k8s-version-656000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0: exit status 102 (6m44.0725737s)

                                                
                                                
-- stdout --
	* [old-k8s-version-656000] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-656000" primary control-plane node in "old-k8s-version-656000" cluster
	* Pulling base image v0.0.45-1726784731-19672 ...
	* Restarting existing docker container for "old-k8s-version-656000" ...
	* Preparing Kubernetes v1.20.0 on Docker 27.3.0 ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-656000 addons enable metrics-server
	
	* Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0923 11:55:11.228326    3272 out.go:345] Setting OutFile to fd 1504 ...
	I0923 11:55:11.313222    3272 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:55:11.313222    3272 out.go:358] Setting ErrFile to fd 1728...
	I0923 11:55:11.313222    3272 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:55:11.335078    3272 out.go:352] Setting JSON to false
	I0923 11:55:11.337808    3272 start.go:129] hostinfo: {"hostname":"minikube4","uptime":53074,"bootTime":1727039437,"procs":203,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0923 11:55:11.337808    3272 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 11:55:11.341844    3272 out.go:177] * [old-k8s-version-656000] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0923 11:55:11.344293    3272 notify.go:220] Checking for updates...
	I0923 11:55:11.345173    3272 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0923 11:55:11.348009    3272 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:55:11.350830    3272 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0923 11:55:11.352625    3272 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 11:55:11.355279    3272 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:55:11.358048    3272 config.go:182] Loaded profile config "old-k8s-version-656000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0923 11:55:11.360672    3272 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0923 11:55:11.366316    3272 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:55:11.553451    3272 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.1 (166053)
	I0923 11:55:11.565729    3272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 11:55:11.891151    3272 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:98 OomKillDisable:true NGoroutines:92 SystemTime:2024-09-23 11:55:11.863347376 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0923 11:55:11.895732    3272 out.go:177] * Using the docker driver based on existing profile
	I0923 11:55:11.898983    3272 start.go:297] selected driver: docker
	I0923 11:55:11.899055    3272 start.go:901] validating driver "docker" against &{Name:old-k8s-version-656000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-656000 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountS
tring:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:55:11.899228    3272 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:55:12.026900    3272 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 11:55:12.350411    3272 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:98 OomKillDisable:true NGoroutines:92 SystemTime:2024-09-23 11:55:12.318393109 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0923 11:55:12.351368    3272 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 11:55:12.351495    3272 cni.go:84] Creating CNI manager for ""
	I0923 11:55:12.351552    3272 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0923 11:55:12.351881    3272 start.go:340] cluster config:
	{Name:old-k8s-version-656000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-656000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:55:12.356257    3272 out.go:177] * Starting "old-k8s-version-656000" primary control-plane node in "old-k8s-version-656000" cluster
	I0923 11:55:12.361688    3272 cache.go:121] Beginning downloading kic base image for docker with docker
	I0923 11:55:12.363699    3272 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0923 11:55:12.366694    3272 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 11:55:12.366694    3272 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 11:55:12.367373    3272 preload.go:146] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0923 11:55:12.367373    3272 cache.go:56] Caching tarball of preloaded images
	I0923 11:55:12.367373    3272 preload.go:172] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 11:55:12.367373    3272 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0923 11:55:12.368037    3272 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-656000\config.json ...
	I0923 11:55:12.471483    3272 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon, skipping pull
	I0923 11:55:12.471483    3272 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in daemon, skipping load
	I0923 11:55:12.471483    3272 cache.go:194] Successfully downloaded all kic artifacts
	I0923 11:55:12.471483    3272 start.go:360] acquireMachinesLock for old-k8s-version-656000: {Name:mk1a6b960c1ba8bad4b251f44cb4f6be0adac039 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 11:55:12.471483    3272 start.go:364] duration metric: took 0s to acquireMachinesLock for "old-k8s-version-656000"
	I0923 11:55:12.471483    3272 start.go:96] Skipping create...Using existing machine configuration
	I0923 11:55:12.471483    3272 fix.go:54] fixHost starting: 
	I0923 11:55:12.488482    3272 cli_runner.go:164] Run: docker container inspect old-k8s-version-656000 --format={{.State.Status}}
	I0923 11:55:12.568868    3272 fix.go:112] recreateIfNeeded on old-k8s-version-656000: state=Stopped err=<nil>
	W0923 11:55:12.568868    3272 fix.go:138] unexpected machine state, will restart: <nil>
	I0923 11:55:12.572868    3272 out.go:177] * Restarting existing docker container for "old-k8s-version-656000" ...
	I0923 11:55:12.583864    3272 cli_runner.go:164] Run: docker start old-k8s-version-656000
	I0923 11:55:13.279564    3272 cli_runner.go:164] Run: docker container inspect old-k8s-version-656000 --format={{.State.Status}}
	I0923 11:55:13.365623    3272 kic.go:430] container "old-k8s-version-656000" state is running.
	I0923 11:55:13.376621    3272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-656000
	I0923 11:55:13.453629    3272 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-656000\config.json ...
	I0923 11:55:13.457639    3272 machine.go:93] provisionDockerMachine start ...
	I0923 11:55:13.465662    3272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-656000
	I0923 11:55:13.541637    3272 main.go:141] libmachine: Using SSH client type: native
	I0923 11:55:13.541637    3272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x761bc0] 0x764700 <nil>  [] 0s} 127.0.0.1 63419 <nil> <nil>}
	I0923 11:55:13.541637    3272 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 11:55:13.544925    3272 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0923 11:55:16.747142    3272 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-656000
	
	I0923 11:55:16.747217    3272 ubuntu.go:169] provisioning hostname "old-k8s-version-656000"
	I0923 11:55:16.757908    3272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-656000
	I0923 11:55:16.833294    3272 main.go:141] libmachine: Using SSH client type: native
	I0923 11:55:16.833294    3272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x761bc0] 0x764700 <nil>  [] 0s} 127.0.0.1 63419 <nil> <nil>}
	I0923 11:55:16.833294    3272 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-656000 && echo "old-k8s-version-656000" | sudo tee /etc/hostname
	I0923 11:55:17.049586    3272 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-656000
	
	I0923 11:55:17.066015    3272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-656000
	I0923 11:55:17.142580    3272 main.go:141] libmachine: Using SSH client type: native
	I0923 11:55:17.143569    3272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x761bc0] 0x764700 <nil>  [] 0s} 127.0.0.1 63419 <nil> <nil>}
	I0923 11:55:17.143569    3272 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-656000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-656000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-656000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 11:55:17.341106    3272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 11:55:17.341106    3272 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0923 11:55:17.341106    3272 ubuntu.go:177] setting up certificates
	I0923 11:55:17.341106    3272 provision.go:84] configureAuth start
	I0923 11:55:17.350057    3272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-656000
	I0923 11:55:17.427020    3272 provision.go:143] copyHostCerts
	I0923 11:55:17.428032    3272 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I0923 11:55:17.428032    3272 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I0923 11:55:17.428032    3272 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0923 11:55:17.429022    3272 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I0923 11:55:17.429022    3272 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I0923 11:55:17.429022    3272 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I0923 11:55:17.430025    3272 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I0923 11:55:17.430025    3272 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I0923 11:55:17.431034    3272 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0923 11:55:17.432035    3272 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.old-k8s-version-656000 san=[127.0.0.1 192.168.103.2 localhost minikube old-k8s-version-656000]
	I0923 11:55:17.951119    3272 provision.go:177] copyRemoteCerts
	I0923 11:55:17.962121    3272 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 11:55:17.972448    3272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-656000
	I0923 11:55:18.047295    3272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63419 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-656000\id_rsa Username:docker}
	I0923 11:55:18.173645    3272 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 11:55:18.219995    3272 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1233 bytes)
	I0923 11:55:18.273666    3272 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 11:55:18.329437    3272 provision.go:87] duration metric: took 988.2842ms to configureAuth
	I0923 11:55:18.329437    3272 ubuntu.go:193] setting minikube options for container-runtime
	I0923 11:55:18.330450    3272 config.go:182] Loaded profile config "old-k8s-version-656000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0923 11:55:18.341334    3272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-656000
	I0923 11:55:18.426091    3272 main.go:141] libmachine: Using SSH client type: native
	I0923 11:55:18.427125    3272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x761bc0] 0x764700 <nil>  [] 0s} 127.0.0.1 63419 <nil> <nil>}
	I0923 11:55:18.427242    3272 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 11:55:18.625617    3272 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0923 11:55:18.625617    3272 ubuntu.go:71] root file system type: overlay
	I0923 11:55:18.625617    3272 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 11:55:18.635076    3272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-656000
	I0923 11:55:18.737166    3272 main.go:141] libmachine: Using SSH client type: native
	I0923 11:55:18.738120    3272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x761bc0] 0x764700 <nil>  [] 0s} 127.0.0.1 63419 <nil> <nil>}
	I0923 11:55:18.738120    3272 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 11:55:18.957386    3272 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 11:55:18.967240    3272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-656000
	I0923 11:55:19.060085    3272 main.go:141] libmachine: Using SSH client type: native
	I0923 11:55:19.061075    3272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x761bc0] 0x764700 <nil>  [] 0s} 127.0.0.1 63419 <nil> <nil>}
	I0923 11:55:19.061075    3272 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 11:55:19.265770    3272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
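The step above applies the new unit with a compare-then-swap idiom: `diff` the installed `docker.service` against `docker.service.new`, and only when they differ move the candidate into place and restart the daemon, which keeps repeated provisioning idempotent. A minimal sketch of the same idiom on throwaway files (no sudo or systemctl; file contents hypothetical):

```shell
# Compare-then-swap: only install the candidate unit (and flag a restart)
# when it actually differs from what is already in place.
set -eu
dir=$(mktemp -d)
printf 'ExecStart=/usr/bin/dockerd\n' > "$dir/docker.service"
printf 'ExecStart=/usr/bin/dockerd --tlsverify\n' > "$dir/docker.service.new"

restarted=no
# diff exits non-zero when the files differ; that is the trigger to swap.
diff -u "$dir/docker.service" "$dir/docker.service.new" >/dev/null || {
  mv "$dir/docker.service.new" "$dir/docker.service"
  restarted=yes
}
echo "restarted=$restarted"
```

Running the same sketch a second time would leave `restarted=no`, since the installed unit already matches.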
	I0923 11:55:19.265770    3272 machine.go:96] duration metric: took 5.8078555s to provisionDockerMachine
	I0923 11:55:19.265770    3272 start.go:293] postStartSetup for "old-k8s-version-656000" (driver="docker")
	I0923 11:55:19.265770    3272 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 11:55:19.277542    3272 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 11:55:19.285352    3272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-656000
	I0923 11:55:19.381345    3272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63419 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-656000\id_rsa Username:docker}
	I0923 11:55:19.535773    3272 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 11:55:19.547895    3272 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 11:55:19.547895    3272 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 11:55:19.547895    3272 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 11:55:19.547895    3272 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0923 11:55:19.547895    3272 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I0923 11:55:19.547895    3272 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I0923 11:55:19.549227    3272 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\43162.pem -> 43162.pem in /etc/ssl/certs
	I0923 11:55:19.564804    3272 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 11:55:19.590455    3272 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\43162.pem --> /etc/ssl/certs/43162.pem (1708 bytes)
	I0923 11:55:19.649597    3272 start.go:296] duration metric: took 383.7758ms for postStartSetup
	I0923 11:55:19.659917    3272 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 11:55:19.668886    3272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-656000
	I0923 11:55:19.737926    3272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63419 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-656000\id_rsa Username:docker}
	I0923 11:55:19.890619    3272 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0923 11:55:19.903513    3272 fix.go:56] duration metric: took 7.4316781s for fixHost
	I0923 11:55:19.903513    3272 start.go:83] releasing machines lock for "old-k8s-version-656000", held for 7.4316781s
	I0923 11:55:19.913816    3272 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-656000
	I0923 11:55:19.986886    3272 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0923 11:55:19.995912    3272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-656000
	I0923 11:55:19.995912    3272 ssh_runner.go:195] Run: cat /version.json
	I0923 11:55:20.006903    3272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-656000
	I0923 11:55:20.067889    3272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63419 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-656000\id_rsa Username:docker}
	I0923 11:55:20.074885    3272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63419 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-656000\id_rsa Username:docker}
	W0923 11:55:20.183079    3272 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
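The warning above comes from the provisioner invoking the Windows binary name `curl.exe` inside the Linux guest, where only `curl` exists (if anything). A hedged sketch of an OS-agnostic probe that tries the Unix name first and falls back (the stub binary is hypothetical, standing in for a real curl):

```shell
# Resolve a curl binary portably instead of hard-coding curl.exe.
set -eu
bindir=$(mktemp -d)
printf '#!/bin/sh\necho ok\n' > "$bindir/curl"
chmod +x "$bindir/curl"
PATH="$bindir:$PATH"

pick_curl() {
  for c in curl curl.exe; do
    if command -v "$c" >/dev/null 2>&1; then
      printf '%s\n' "$c"
      return 0
    fi
  done
  printf 'none\n'
}
pick_curl
```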
	I0923 11:55:20.203162    3272 ssh_runner.go:195] Run: systemctl --version
	I0923 11:55:20.232151    3272 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 11:55:20.259521    3272 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W0923 11:55:20.280284    3272 start.go:439] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
	I0923 11:55:20.291286    3272 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	W0923 11:55:20.301289    3272 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W0923 11:55:20.301289    3272 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0923 11:55:20.338779    3272 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0923 11:55:20.374204    3272 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 11:55:20.374204    3272 start.go:495] detecting cgroup driver to use...
	I0923 11:55:20.374204    3272 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 11:55:20.374752    3272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 11:55:20.422558    3272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0923 11:55:20.464020    3272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 11:55:20.487944    3272 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 11:55:20.499928    3272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 11:55:20.532375    3272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 11:55:20.570718    3272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 11:55:20.601545    3272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 11:55:20.635693    3272 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 11:55:20.666490    3272 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 11:55:20.710528    3272 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 11:55:20.743011    3272 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 11:55:20.776308    3272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:55:20.942990    3272 ssh_runner.go:195] Run: sudo systemctl restart containerd
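The run of `sed -i` commands above rewrites `/etc/containerd/config.toml` in place: pin `sandbox_image` to `registry.k8s.io/pause:3.2`, force `SystemdCgroup = false` for the cgroupfs driver, and migrate runtime names to `io.containerd.runc.v2`. A sketch of the same substitutions against a throwaway config (field values hypothetical; assumes GNU sed, as in the Ubuntu guest):

```shell
# Apply the provisioner's key config.toml substitutions to a scratch copy.
set -eu
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
  sandbox_image = "k8s.gcr.io/pause:3.1"
  SystemdCgroup = true
  runtime_type = "io.containerd.runc.v1"
EOF

sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' "$cfg"
cat "$cfg"
```

The `( *)` capture preserves each line's original indentation while the value is replaced.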
	I0923 11:55:21.155851    3272 start.go:495] detecting cgroup driver to use...
	I0923 11:55:21.155851    3272 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 11:55:21.171159    3272 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 11:55:21.198884    3272 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0923 11:55:21.214683    3272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 11:55:21.236507    3272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 11:55:21.293430    3272 ssh_runner.go:195] Run: which cri-dockerd
	I0923 11:55:21.321401    3272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 11:55:21.343407    3272 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0923 11:55:21.393849    3272 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 11:55:21.610724    3272 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 11:55:21.776969    3272 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 11:55:21.777659    3272 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 11:55:21.829467    3272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:55:22.015886    3272 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 11:55:22.921097    3272 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 11:55:22.989273    3272 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 11:55:23.063705    3272 out.go:235] * Preparing Kubernetes v1.20.0 on Docker 27.3.0 ...
	I0923 11:55:23.079776    3272 cli_runner.go:164] Run: docker exec -t old-k8s-version-656000 dig +short host.docker.internal
	I0923 11:55:23.251299    3272 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0923 11:55:23.262258    3272 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0923 11:55:23.277038    3272 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
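The hosts-file edit above uses a filter-append-replace pattern: strip any stale `host.minikube.internal` entry, append the fresh one, and copy the rewritten file back over the original. A sandboxed sketch with a temp file standing in for `/etc/hosts` (IP value hypothetical):

```shell
# Filter out any stale host.minikube.internal line, append the fresh one,
# then copy the rewrite back over the original (stand-in for /etc/hosts).
set -eu
hosts=$(mktemp)
printf '127.0.0.1 localhost\n192.168.65.2 host.minikube.internal\n' > "$hosts"

ip=192.168.65.254   # hypothetical host-gateway address
{ grep -v 'host.minikube.internal' "$hosts"
  printf '%s host.minikube.internal\n' "$ip"; } > "$hosts.new"
cp "$hosts.new" "$hosts"
cat "$hosts"
```

Because the old entry is filtered before the new one is appended, the entry is replaced rather than duplicated no matter how many times the step runs.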
	I0923 11:55:23.317132    3272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-656000
	I0923 11:55:23.403620    3272 kubeadm.go:883] updating cluster {Name:old-k8s-version-656000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-656000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 11:55:23.403620    3272 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 11:55:23.411617    3272 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 11:55:23.456630    3272 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0923 11:55:23.456630    3272 docker.go:691] registry.k8s.io/kube-apiserver:v1.20.0 wasn't preloaded
	I0923 11:55:23.466646    3272 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0923 11:55:23.507399    3272 ssh_runner.go:195] Run: which lz4
	I0923 11:55:23.538907    3272 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0923 11:55:23.546902    3272 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0923 11:55:23.546902    3272 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (401930599 bytes)
	I0923 11:55:31.401909    3272 docker.go:649] duration metric: took 7.8795414s to copy over tarball
	I0923 11:55:31.418656    3272 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0923 11:55:36.691117    3272 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (5.272211s)
	I0923 11:55:36.691117    3272 ssh_runner.go:146] rm: /preloaded.tar.lz4
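The preload step above copies a ~400 MB lz4 tarball over SCP, unpacks it into `/var` with `tar -I lz4`, then removes the tarball. A local sketch of the pack-and-extract half of that pattern (gzip substituted for lz4 so the sketch runs without the lz4 tool; paths hypothetical):

```shell
# Pack a tree and unpack it under a target root, mirroring the
# `tar -I lz4 -C /var -xf /preloaded.tar.lz4` step.
set -eu
src=$(mktemp -d); dst=$(mktemp -d); tarball=$(mktemp)
mkdir -p "$src/lib/docker/overlay2"
echo layer-data > "$src/lib/docker/overlay2/layer"

# -I names the (de)compression filter; -C switches root before archiving
# or extracting, so member paths stay relative.
tar -I gzip -cf "$tarball" -C "$src" .
tar -I gzip -xf "$tarball" -C "$dst"
cat "$dst/lib/docker/overlay2/layer"
```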
	I0923 11:55:36.804118    3272 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0923 11:55:36.824904    3272 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2824 bytes)
	I0923 11:55:36.874847    3272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:55:37.035564    3272 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 11:55:44.838062    3272 ssh_runner.go:235] Completed: sudo systemctl restart docker: (7.8021276s)
	I0923 11:55:44.855011    3272 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 11:55:44.904007    3272 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0923 11:55:44.904007    3272 docker.go:691] registry.k8s.io/kube-apiserver:v1.20.0 wasn't preloaded
	I0923 11:55:44.904007    3272 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0923 11:55:44.925067    3272 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0923 11:55:44.937016    3272 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0923 11:55:44.947024    3272 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0923 11:55:44.947024    3272 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0923 11:55:44.955013    3272 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 11:55:44.956033    3272 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0923 11:55:44.967024    3272 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0923 11:55:44.970065    3272 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0923 11:55:44.970065    3272 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0923 11:55:44.978021    3272 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 11:55:44.983041    3272 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0923 11:55:44.983041    3272 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0923 11:55:44.993020    3272 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0923 11:55:44.996019    3272 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0923 11:55:45.001016    3272 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0923 11:55:45.012038    3272 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	W0923 11:55:45.069031    3272 image.go:188] authn lookup for registry.k8s.io/kube-proxy:v1.20.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0923 11:55:45.166828    3272 image.go:188] authn lookup for registry.k8s.io/kube-controller-manager:v1.20.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0923 11:55:45.265850    3272 image.go:188] authn lookup for registry.k8s.io/kube-apiserver:v1.20.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0923 11:55:45.360290    3272 image.go:188] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0923 11:55:45.457865    3272 image.go:188] authn lookup for registry.k8s.io/kube-scheduler:v1.20.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0923 11:55:45.552870    3272 image.go:188] authn lookup for registry.k8s.io/coredns:1.7.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0923 11:55:45.646115    3272 image.go:188] authn lookup for registry.k8s.io/pause:3.2 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0923 11:55:45.666131    3272 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0923 11:55:45.679124    3272 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0923 11:55:45.705139    3272 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0923 11:55:45.727160    3272 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0923 11:55:45.727160    3272 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.20.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.20.0
	I0923 11:55:45.727160    3272 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0923 11:55:45.729139    3272 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0923 11:55:45.742121    3272 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0923 11:55:45.742121    3272 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.20.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.20.0
	I0923 11:55:45.742121    3272 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0923 11:55:45.742121    3272 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.20.0
	I0923 11:55:45.755121    3272 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.20.0
	W0923 11:55:45.770124    3272 image.go:188] authn lookup for registry.k8s.io/etcd:3.4.13-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0923 11:55:45.805365    3272 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0923 11:55:45.805365    3272 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.20.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.20.0
	I0923 11:55:45.805365    3272 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0923 11:55:45.816351    3272 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0923 11:55:45.816351    3272 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.20.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.20.0
	I0923 11:55:45.816351    3272 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0923 11:55:45.822341    3272 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0923 11:55:45.836335    3272 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0923 11:55:45.848333    3272 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0923 11:55:45.857340    3272 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0923 11:55:45.915099    3272 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.20.0
	I0923 11:55:45.932106    3272 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.20.0
	I0923 11:55:45.936083    3272 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 11:55:46.003760    3272 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0923 11:55:46.003760    3272 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.20.0
	I0923 11:55:46.003760    3272 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.20.0
	I0923 11:55:46.003760    3272 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0923 11:55:46.003760    3272 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.2 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.2
	I0923 11:55:46.003760    3272 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.7.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.7.0
	I0923 11:55:46.003760    3272 docker.go:337] Removing image: registry.k8s.io/pause:3.2
	I0923 11:55:46.003760    3272 docker.go:337] Removing image: registry.k8s.io/coredns:1.7.0
	I0923 11:55:46.010764    3272 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0923 11:55:46.020787    3272 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0923 11:55:46.023776    3272 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.7.0
	I0923 11:55:46.095765    3272 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0923 11:55:46.096766    3272 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.4.13-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.13-0
	I0923 11:55:46.096766    3272 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0923 11:55:46.104780    3272 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.2
	I0923 11:55:46.105767    3272 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.7.0
	I0923 11:55:46.108756    3272 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.13-0
	I0923 11:55:46.153151    3272 cache_images.go:289] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.13-0
	I0923 11:55:46.153536    3272 cache_images.go:92] duration metric: took 1.2494046s to LoadCachedImages
	W0923 11:55:46.153705    3272 out.go:270] X Unable to load cached images: LoadCachedImages: CreateFile C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.20.0: The system cannot find the file specified.
	X Unable to load cached images: LoadCachedImages: CreateFile C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.20.0: The system cannot find the file specified.
	I0923 11:55:46.153776    3272 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.20.0 docker true true} ...
	I0923 11:55:46.154020    3272 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-656000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-656000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 11:55:46.164812    3272 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0923 11:55:46.295876    3272 cni.go:84] Creating CNI manager for ""
	I0923 11:55:46.295949    3272 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0923 11:55:46.296001    3272 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 11:55:46.296060    3272 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-656000 NodeName:old-k8s-version-656000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0923 11:55:46.296424    3272 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-656000"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 11:55:46.320949    3272 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0923 11:55:46.345936    3272 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 11:55:46.357937    3272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 11:55:46.387954    3272 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (349 bytes)
	I0923 11:55:46.421943    3272 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 11:55:46.460951    3272 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2121 bytes)
	I0923 11:55:46.522572    3272 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0923 11:55:46.534541    3272 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 11:55:46.586066    3272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:55:46.801564    3272 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 11:55:46.832534    3272 certs.go:68] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-656000 for IP: 192.168.103.2
	I0923 11:55:46.832534    3272 certs.go:194] generating shared ca certs ...
	I0923 11:55:46.832534    3272 certs.go:226] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:55:46.833552    3272 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I0923 11:55:46.833552    3272 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I0923 11:55:46.833552    3272 certs.go:256] generating profile certs ...
	I0923 11:55:46.834550    3272 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-656000\client.key
	I0923 11:55:46.834550    3272 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-656000\apiserver.key.733740e9
	I0923 11:55:46.835544    3272 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-656000\proxy-client.key
	I0923 11:55:46.837533    3272 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4316.pem (1338 bytes)
	W0923 11:55:46.838549    3272 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4316_empty.pem, impossibly tiny 0 bytes
	I0923 11:55:46.838549    3272 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0923 11:55:46.838549    3272 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0923 11:55:46.839541    3272 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0923 11:55:46.839541    3272 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0923 11:55:46.840532    3272 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\43162.pem (1708 bytes)
	I0923 11:55:46.842537    3272 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 11:55:46.910529    3272 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 11:55:46.994524    3272 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 11:55:47.086544    3272 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 11:55:47.211556    3272 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-656000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0923 11:55:47.289170    3272 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-656000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 11:55:47.348003    3272 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-656000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 11:55:47.428018    3272 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-656000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0923 11:55:47.501115    3272 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4316.pem --> /usr/share/ca-certificates/4316.pem (1338 bytes)
	I0923 11:55:47.612104    3272 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\43162.pem --> /usr/share/ca-certificates/43162.pem (1708 bytes)
	I0923 11:55:47.666133    3272 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 11:55:47.721436    3272 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 11:55:47.807986    3272 ssh_runner.go:195] Run: openssl version
	I0923 11:55:47.835971    3272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4316.pem && ln -fs /usr/share/ca-certificates/4316.pem /etc/ssl/certs/4316.pem"
	I0923 11:55:47.911613    3272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4316.pem
	I0923 11:55:47.998240    3272 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 23 10:42 /usr/share/ca-certificates/4316.pem
	I0923 11:55:48.016221    3272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4316.pem
	I0923 11:55:48.055270    3272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4316.pem /etc/ssl/certs/51391683.0"
	I0923 11:55:48.108256    3272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/43162.pem && ln -fs /usr/share/ca-certificates/43162.pem /etc/ssl/certs/43162.pem"
	I0923 11:55:48.141254    3272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/43162.pem
	I0923 11:55:48.191723    3272 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 23 10:42 /usr/share/ca-certificates/43162.pem
	I0923 11:55:48.205705    3272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/43162.pem
	I0923 11:55:48.236721    3272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/43162.pem /etc/ssl/certs/3ec20f2e.0"
	I0923 11:55:48.279920    3272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 11:55:48.324911    3272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:55:48.374844    3272 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:55:48.402245    3272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:55:48.429194    3272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 11:55:48.506192    3272 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 11:55:48.596340    3272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0923 11:55:48.626023    3272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0923 11:55:48.659903    3272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0923 11:55:48.702903    3272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0923 11:55:48.733924    3272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0923 11:55:48.816105    3272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0923 11:55:48.890911    3272 kubeadm.go:392] StartCluster: {Name:old-k8s-version-656000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-656000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:55:48.908931    3272 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 11:55:49.103965    3272 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 11:55:49.183835    3272 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0923 11:55:49.183835    3272 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0923 11:55:49.207826    3272 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0923 11:55:49.271834    3272 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0923 11:55:49.286827    3272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-656000
	I0923 11:55:49.384848    3272 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-656000" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0923 11:55:49.385833    3272 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-656000" cluster setting kubeconfig missing "old-k8s-version-656000" context setting]
	I0923 11:55:49.387856    3272 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:55:49.429007    3272 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0923 11:55:49.518440    3272 kubeadm.go:630] The running cluster does not require reconfiguration: 127.0.0.1
	I0923 11:55:49.518440    3272 kubeadm.go:597] duration metric: took 334.5891ms to restartPrimaryControlPlane
	I0923 11:55:49.518440    3272 kubeadm.go:394] duration metric: took 627.4991ms to StartCluster
	I0923 11:55:49.518440    3272 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:55:49.518440    3272 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0923 11:55:49.521449    3272 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:55:49.523471    3272 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 11:55:49.523471    3272 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0923 11:55:49.523471    3272 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-656000"
	I0923 11:55:49.524443    3272 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-656000"
	W0923 11:55:49.524443    3272 addons.go:243] addon storage-provisioner should already be in state true
	I0923 11:55:49.524443    3272 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-656000"
	I0923 11:55:49.524443    3272 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-656000"
	I0923 11:55:49.524443    3272 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-656000"
	I0923 11:55:49.524443    3272 addons.go:69] Setting dashboard=true in profile "old-k8s-version-656000"
	I0923 11:55:49.524443    3272 host.go:66] Checking if "old-k8s-version-656000" exists ...
	I0923 11:55:49.524443    3272 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-656000"
	W0923 11:55:49.524443    3272 addons.go:243] addon metrics-server should already be in state true
	I0923 11:55:49.524443    3272 host.go:66] Checking if "old-k8s-version-656000" exists ...
	I0923 11:55:49.524443    3272 addons.go:234] Setting addon dashboard=true in "old-k8s-version-656000"
	W0923 11:55:49.524443    3272 addons.go:243] addon dashboard should already be in state true
	I0923 11:55:49.524443    3272 host.go:66] Checking if "old-k8s-version-656000" exists ...
	I0923 11:55:49.524443    3272 config.go:182] Loaded profile config "old-k8s-version-656000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0923 11:55:49.528443    3272 out.go:177] * Verifying Kubernetes components...
	I0923 11:55:49.564467    3272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 11:55:49.564467    3272 cli_runner.go:164] Run: docker container inspect old-k8s-version-656000 --format={{.State.Status}}
	I0923 11:55:49.566447    3272 cli_runner.go:164] Run: docker container inspect old-k8s-version-656000 --format={{.State.Status}}
	I0923 11:55:49.568462    3272 cli_runner.go:164] Run: docker container inspect old-k8s-version-656000 --format={{.State.Status}}
	I0923 11:55:49.569450    3272 cli_runner.go:164] Run: docker container inspect old-k8s-version-656000 --format={{.State.Status}}
	I0923 11:55:49.663530    3272 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0923 11:55:49.666554    3272 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 11:55:49.666554    3272 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-656000"
	W0923 11:55:49.666554    3272 addons.go:243] addon default-storageclass should already be in state true
	I0923 11:55:49.666554    3272 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0923 11:55:49.666554    3272 host.go:66] Checking if "old-k8s-version-656000" exists ...
	I0923 11:55:49.668539    3272 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 11:55:49.668539    3272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 11:55:49.672544    3272 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0923 11:55:49.672544    3272 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0923 11:55:49.677546    3272 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0923 11:55:49.683546    3272 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 11:55:49.683546    3272 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 11:55:49.692550    3272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-656000
	I0923 11:55:49.692550    3272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-656000
	I0923 11:55:49.699537    3272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-656000
	I0923 11:55:49.700620    3272 cli_runner.go:164] Run: docker container inspect old-k8s-version-656000 --format={{.State.Status}}
	I0923 11:55:49.763538    3272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63419 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-656000\id_rsa Username:docker}
	I0923 11:55:49.763538    3272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63419 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-656000\id_rsa Username:docker}
	I0923 11:55:49.768530    3272 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 11:55:49.768530    3272 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 11:55:49.768530    3272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63419 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-656000\id_rsa Username:docker}
	I0923 11:55:49.786568    3272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-656000
	I0923 11:55:49.854535    3272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63419 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\old-k8s-version-656000\id_rsa Username:docker}
	I0923 11:55:50.400049    3272 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 11:55:50.488298    3272 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0923 11:55:50.488298    3272 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0923 11:55:50.574300    3272 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 11:55:50.574300    3272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0923 11:55:50.604297    3272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 11:55:50.604297    3272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-656000
	I0923 11:55:50.607306    3272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 11:55:50.674302    3272 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-656000" to be "Ready" ...
	I0923 11:55:50.772838    3272 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0923 11:55:50.772914    3272 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0923 11:55:50.788759    3272 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 11:55:50.788759    3272 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 11:55:51.076517    3272 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0923 11:55:51.076517    3272 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0923 11:55:51.089110    3272 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 11:55:51.089110    3272 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 11:55:51.292408    3272 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0923 11:55:51.292408    3272 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0923 11:55:51.312449    3272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 11:55:51.486227    3272 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0923 11:55:51.486227    3272 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0923 11:55:51.581294    3272 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 11:55:51.581354    3272 retry.go:31] will retry after 248.825688ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0923 11:55:51.581478    3272 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 11:55:51.581478    3272 retry.go:31] will retry after 201.633248ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 11:55:51.692372    3272 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0923 11:55:51.692372    3272 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0923 11:55:51.805933    3272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0923 11:55:51.840605    3272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 11:55:51.883375    3272 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0923 11:55:51.883375    3272 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W0923 11:55:51.992406    3272 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 11:55:51.992458    3272 retry.go:31] will retry after 195.483174ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 11:55:52.093706    3272 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0923 11:55:52.093898    3272 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0923 11:55:52.208038    3272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 11:55:52.478937    3272 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0923 11:55:52.478937    3272 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	W0923 11:55:52.680776    3272 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 11:55:52.680776    3272 retry.go:31] will retry after 208.495355ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 11:55:52.877942    3272 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.037288s)
	W0923 11:55:52.877942    3272 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 11:55:52.878951    3272 retry.go:31] will retry after 339.305051ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 11:55:52.902942    3272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0923 11:55:52.911938    3272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0923 11:55:53.234902    3272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 11:55:53.287912    3272 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.0798228s)
	W0923 11:55:53.287912    3272 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 11:55:53.287912    3272 retry.go:31] will retry after 444.951811ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 11:55:53.752226    3272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 11:55:53.986560    3272 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.0835667s)
	I0923 11:55:53.986560    3272 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.0745704s)
	W0923 11:55:53.986560    3272 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0923 11:55:53.986560    3272 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 11:55:53.986560    3272 retry.go:31] will retry after 323.026423ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 11:55:53.986560    3272 retry.go:31] will retry after 413.749091ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 11:55:54.282148    3272 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.0436526s)
	W0923 11:55:54.282264    3272 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 11:55:54.282264    3272 retry.go:31] will retry after 531.748424ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0923 11:55:54.322559    3272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0923 11:55:54.413894    3272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0923 11:55:54.834611    3272 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 11:56:04.674653    3272 node_ready.go:49] node "old-k8s-version-656000" has status "Ready":"True"
	I0923 11:56:04.674653    3272 node_ready.go:38] duration metric: took 13.9996871s for node "old-k8s-version-656000" to be "Ready" ...
	I0923 11:56:04.674653    3272 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 11:56:04.792858    3272 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-fvz5d" in "kube-system" namespace to be "Ready" ...
	I0923 11:56:05.280200    3272 pod_ready.go:93] pod "coredns-74ff55c5b-fvz5d" in "kube-system" namespace has status "Ready":"True"
	I0923 11:56:05.280231    3272 pod_ready.go:82] duration metric: took 487.3505ms for pod "coredns-74ff55c5b-fvz5d" in "kube-system" namespace to be "Ready" ...
	I0923 11:56:05.280231    3272 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-656000" in "kube-system" namespace to be "Ready" ...
	I0923 11:56:05.588327    3272 pod_ready.go:93] pod "etcd-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"True"
	I0923 11:56:05.588327    3272 pod_ready.go:82] duration metric: took 308.081ms for pod "etcd-old-k8s-version-656000" in "kube-system" namespace to be "Ready" ...
	I0923 11:56:05.588327    3272 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-656000" in "kube-system" namespace to be "Ready" ...
	I0923 11:56:07.681082    3272 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (13.9281948s)
	I0923 11:56:07.681082    3272 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-656000"
	I0923 11:56:07.702098    3272 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:56:09.690470    3272 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (15.3671824s)
	I0923 11:56:09.690470    3272 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (15.2758519s)
	I0923 11:56:09.690470    3272 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (14.8551544s)
	I0923 11:56:09.693457    3272 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-656000 addons enable metrics-server
	
	I0923 11:56:09.775839    3272 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:56:09.883446    3272 out.go:177] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I0923 11:56:09.890442    3272 addons.go:510] duration metric: took 20.3660049s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I0923 11:56:12.106236    3272 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:56:14.609539    3272 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:56:15.603142    3272 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"True"
	I0923 11:56:15.603142    3272 pod_ready.go:82] duration metric: took 10.0143406s for pod "kube-apiserver-old-k8s-version-656000" in "kube-system" namespace to be "Ready" ...
	I0923 11:56:15.603142    3272 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-656000" in "kube-system" namespace to be "Ready" ...
	I0923 11:56:15.627155    3272 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"True"
	I0923 11:56:15.627155    3272 pod_ready.go:82] duration metric: took 24.0115ms for pod "kube-controller-manager-old-k8s-version-656000" in "kube-system" namespace to be "Ready" ...
	I0923 11:56:15.627155    3272 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-mk6ch" in "kube-system" namespace to be "Ready" ...
	I0923 11:56:15.649160    3272 pod_ready.go:93] pod "kube-proxy-mk6ch" in "kube-system" namespace has status "Ready":"True"
	I0923 11:56:15.649160    3272 pod_ready.go:82] duration metric: took 22.0037ms for pod "kube-proxy-mk6ch" in "kube-system" namespace to be "Ready" ...
	I0923 11:56:15.649160    3272 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace to be "Ready" ...
	I0923 11:56:17.664486    3272 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:56:19.667615    3272 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:56:22.162840    3272 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:56:24.167877    3272 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:56:26.678257    3272 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:56:28.688353    3272 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:56:31.186954    3272 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:56:33.667125    3272 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:56:35.673227    3272 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:56:37.676676    3272 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:56:40.173207    3272 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:56:42.668381    3272 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:56:44.791056    3272 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:56:47.177777    3272 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:56:49.178651    3272 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:56:51.669153    3272 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:56:53.669861    3272 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:56:56.180430    3272 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:56:58.667084    3272 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:57:00.671431    3272 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:57:02.672571    3272 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:57:05.166030    3272 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:57:07.673318    3272 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:57:10.170765    3272 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:57:12.666647    3272 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:57:14.669589    3272 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:57:16.671182    3272 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:57:19.165491    3272 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:57:21.166128    3272 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:57:23.169588    3272 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:57:25.169946    3272 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:57:27.669650    3272 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:57:30.167359    3272 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"False"
	I0923 11:57:31.168730    3272 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace has status "Ready":"True"
	I0923 11:57:31.168730    3272 pod_ready.go:82] duration metric: took 1m15.5159945s for pod "kube-scheduler-old-k8s-version-656000" in "kube-system" namespace to be "Ready" ...
	I0923 11:57:31.168730    3272 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace to be "Ready" ...
	I0923 11:57:33.184252    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:57:35.186884    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:57:37.684474    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:57:40.186218    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:57:42.687056    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:57:45.198222    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:57:47.686220    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:57:50.189011    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:57:52.686686    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:57:54.688232    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:57:57.185758    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:57:59.186668    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:58:01.686997    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:58:04.187240    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:58:06.187869    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:58:08.686038    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:58:10.688103    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:58:12.690406    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:58:15.185539    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:58:17.686895    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:58:20.188110    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:58:22.688679    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:58:25.187399    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:58:27.690654    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:58:29.692833    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:58:32.185127    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:58:34.187418    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:58:36.688589    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:58:38.692211    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:58:41.188592    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:58:43.688506    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:58:46.186119    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:58:48.187566    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:58:50.188062    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:58:52.188408    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:58:54.189889    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:58:56.695583    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:58:59.187951    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:59:01.688966    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:59:04.188155    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:59:06.691884    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:59:09.188285    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:59:11.190247    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:59:13.691474    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:59:16.188591    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:59:18.691654    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:59:21.188675    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:59:23.189808    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:59:25.190268    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:59:27.691939    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:59:29.692650    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:59:32.189824    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:59:34.191943    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:59:36.690938    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:59:39.199166    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:59:41.689133    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:59:43.692088    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:59:45.699802    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:59:48.192150    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:59:50.193506    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:59:52.706140    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:59:55.193964    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 11:59:57.692090    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:00:00.193051    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:00:02.691118    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:00:05.189569    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:00:07.192791    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:00:09.691054    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:00:11.691818    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:00:14.191067    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:00:16.192918    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:00:18.692304    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:00:21.194631    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:00:23.692603    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:00:26.218976    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:00:28.693590    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:00:31.192122    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:00:33.201145    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:00:35.693714    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:00:38.191066    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:00:40.193767    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:00:42.696014    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:00:45.194896    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:00:47.691968    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:00:49.695518    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:00:51.699848    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:00:54.195735    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:00:56.208784    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:00:58.694982    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:01:00.700452    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:01:03.191268    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:01:05.196301    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:01:07.198132    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:01:09.702282    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:01:12.199129    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:01:14.221088    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:01:16.702160    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:01:19.193062    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:01:21.198106    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:01:24.144156    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:01:26.205538    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:01:28.702789    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:01:31.180338    3272 pod_ready.go:82] duration metric: took 4m0.0002592s for pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace to be "Ready" ...
	E0923 12:01:31.180538    3272 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0923 12:01:31.180538    3272 pod_ready.go:39] duration metric: took 5m26.4904396s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:01:31.180538    3272 api_server.go:52] waiting for apiserver process to appear ...
	I0923 12:01:31.189652    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 12:01:31.237094    3272 logs.go:276] 2 containers: [cf370ce76d59 710a0ba13429]
	I0923 12:01:31.246877    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 12:01:31.292057    3272 logs.go:276] 2 containers: [649f5bcaa4f1 5fde0ebfccf6]
	I0923 12:01:31.303333    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 12:01:31.353774    3272 logs.go:276] 2 containers: [6a0ae0205b95 cd2751a49ca4]
	I0923 12:01:31.363056    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 12:01:31.403714    3272 logs.go:276] 2 containers: [8dfd1e04a342 9dfaf3fd956f]
	I0923 12:01:31.418097    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 12:01:31.470155    3272 logs.go:276] 2 containers: [86d33ed9ed44 2e4fea7d2041]
	I0923 12:01:31.480778    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 12:01:31.523776    3272 logs.go:276] 2 containers: [8fb2248f24f8 f1a54f9ee3db]
	I0923 12:01:31.532784    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 12:01:31.584149    3272 logs.go:276] 0 containers: []
	W0923 12:01:31.584149    3272 logs.go:278] No container was found matching "kindnet"
	I0923 12:01:31.595346    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0923 12:01:31.642314    3272 logs.go:276] 1 containers: [2b7dc21ea030]
	I0923 12:01:31.655758    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 12:01:31.693308    3272 logs.go:276] 2 containers: [f26f0d83255a 434662eca49c]
	I0923 12:01:31.693308    3272 logs.go:123] Gathering logs for kube-apiserver [710a0ba13429] ...
	I0923 12:01:31.693308    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 710a0ba13429"
	I0923 12:01:31.801618    3272 logs.go:123] Gathering logs for etcd [649f5bcaa4f1] ...
	I0923 12:01:31.801618    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5bcaa4f1"
	I0923 12:01:31.865922    3272 logs.go:123] Gathering logs for etcd [5fde0ebfccf6] ...
	I0923 12:01:31.865922    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fde0ebfccf6"
	I0923 12:01:31.924795    3272 logs.go:123] Gathering logs for kube-proxy [86d33ed9ed44] ...
	I0923 12:01:31.924795    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86d33ed9ed44"
	I0923 12:01:31.972288    3272 logs.go:123] Gathering logs for kubernetes-dashboard [2b7dc21ea030] ...
	I0923 12:01:31.972384    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b7dc21ea030"
	I0923 12:01:32.021434    3272 logs.go:123] Gathering logs for kubelet ...
	I0923 12:01:32.021434    3272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0923 12:01:32.104879    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:09 old-k8s-version-656000 kubelet[1893]: E0923 11:56:09.975569    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0923 12:01:32.106874    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:12 old-k8s-version-656000 kubelet[1893]: E0923 11:56:12.641934    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.107411    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:13 old-k8s-version-656000 kubelet[1893]: E0923 11:56:13.742174    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.111458    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:29 old-k8s-version-656000 kubelet[1893]: E0923 11:56:29.020726    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0923 12:01:32.113062    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:33 old-k8s-version-656000 kubelet[1893]: E0923 11:56:33.177742    1893 pod_workers.go:191] Error syncing pod 5ea83d08-b331-4bfa-995f-9856437c78ec ("storage-provisioner_kube-system(5ea83d08-b331-4bfa-995f-9856437c78ec)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5ea83d08-b331-4bfa-995f-9856437c78ec)"
	W0923 12:01:32.113465    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:43 old-k8s-version-656000 kubelet[1893]: E0923 11:56:43.963428    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.115459    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:56 old-k8s-version-656000 kubelet[1893]: E0923 11:56:56.122782    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0923 12:01:32.116026    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:56 old-k8s-version-656000 kubelet[1893]: E0923 11:56:56.777992    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.118208    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:58 old-k8s-version-656000 kubelet[1893]: E0923 11:56:58.985647    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0923 12:01:32.120520    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:10 old-k8s-version-656000 kubelet[1893]: E0923 11:57:10.376499    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0923 12:01:32.120520    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:12 old-k8s-version-656000 kubelet[1893]: E0923 11:57:12.902413    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.120520    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:20 old-k8s-version-656000 kubelet[1893]: E0923 11:57:20.903057    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.121513    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:27 old-k8s-version-656000 kubelet[1893]: E0923 11:57:27.903462    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.123493    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:32 old-k8s-version-656000 kubelet[1893]: E0923 11:57:32.470797    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0923 12:01:32.125506    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:40 old-k8s-version-656000 kubelet[1893]: E0923 11:57:40.005343    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0923 12:01:32.125506    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:46 old-k8s-version-656000 kubelet[1893]: E0923 11:57:46.898299    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.125506    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:50 old-k8s-version-656000 kubelet[1893]: E0923 11:57:50.897643    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.126517    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:57 old-k8s-version-656000 kubelet[1893]: E0923 11:57:57.897696    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.126517    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:02 old-k8s-version-656000 kubelet[1893]: E0923 11:58:02.898254    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.126517    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:09 old-k8s-version-656000 kubelet[1893]: E0923 11:58:09.893616    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.126517    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:13 old-k8s-version-656000 kubelet[1893]: E0923 11:58:13.893937    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.128836    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:21 old-k8s-version-656000 kubelet[1893]: E0923 11:58:21.372831    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0923 12:01:32.128836    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:26 old-k8s-version-656000 kubelet[1893]: E0923 11:58:26.893634    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.129820    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:33 old-k8s-version-656000 kubelet[1893]: E0923 11:58:33.905339    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.129820    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:38 old-k8s-version-656000 kubelet[1893]: E0923 11:58:38.889353    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.129820    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:47 old-k8s-version-656000 kubelet[1893]: E0923 11:58:47.889599    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.129820    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:53 old-k8s-version-656000 kubelet[1893]: E0923 11:58:53.888891    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.129820    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:59 old-k8s-version-656000 kubelet[1893]: E0923 11:58:59.889484    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.133525    3272 logs.go:138] Found kubelet problem: Sep 23 11:59:05 old-k8s-version-656000 kubelet[1893]: E0923 11:59:05.927866    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0923 12:01:32.134048    3272 logs.go:138] Found kubelet problem: Sep 23 11:59:13 old-k8s-version-656000 kubelet[1893]: E0923 11:59:13.886312    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.134119    3272 logs.go:138] Found kubelet problem: Sep 23 11:59:20 old-k8s-version-656000 kubelet[1893]: E0923 11:59:20.885891    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.134119    3272 logs.go:138] Found kubelet problem: Sep 23 11:59:28 old-k8s-version-656000 kubelet[1893]: E0923 11:59:28.884672    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.134696    3272 logs.go:138] Found kubelet problem: Sep 23 11:59:35 old-k8s-version-656000 kubelet[1893]: E0923 11:59:35.880315    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.137088    3272 logs.go:138] Found kubelet problem: Sep 23 11:59:42 old-k8s-version-656000 kubelet[1893]: E0923 11:59:42.382968    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0923 12:01:32.137150    3272 logs.go:138] Found kubelet problem: Sep 23 11:59:50 old-k8s-version-656000 kubelet[1893]: E0923 11:59:50.879975    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.137150    3272 logs.go:138] Found kubelet problem: Sep 23 11:59:54 old-k8s-version-656000 kubelet[1893]: E0923 11:59:54.879656    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.137747    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:01 old-k8s-version-656000 kubelet[1893]: E0923 12:00:01.880164    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.137747    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:07 old-k8s-version-656000 kubelet[1893]: E0923 12:00:07.877117    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.137747    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:13 old-k8s-version-656000 kubelet[1893]: E0923 12:00:13.875328    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.138462    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:20 old-k8s-version-656000 kubelet[1893]: E0923 12:00:20.875342    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.138462    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:27 old-k8s-version-656000 kubelet[1893]: E0923 12:00:27.875407    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.138462    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:31 old-k8s-version-656000 kubelet[1893]: E0923 12:00:31.875521    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.139093    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:39 old-k8s-version-656000 kubelet[1893]: E0923 12:00:39.871312    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.139093    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:44 old-k8s-version-656000 kubelet[1893]: E0923 12:00:44.893061    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.139780    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:50 old-k8s-version-656000 kubelet[1893]: E0923 12:00:50.872230    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.140774    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:58 old-k8s-version-656000 kubelet[1893]: E0923 12:00:58.872565    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.140774    3272 logs.go:138] Found kubelet problem: Sep 23 12:01:01 old-k8s-version-656000 kubelet[1893]: E0923 12:01:01.876882    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.141819    3272 logs.go:138] Found kubelet problem: Sep 23 12:01:10 old-k8s-version-656000 kubelet[1893]: E0923 12:01:10.868254    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.141819    3272 logs.go:138] Found kubelet problem: Sep 23 12:01:12 old-k8s-version-656000 kubelet[1893]: E0923 12:01:12.868011    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.141819    3272 logs.go:138] Found kubelet problem: Sep 23 12:01:23 old-k8s-version-656000 kubelet[1893]: E0923 12:01:23.867467    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.142547    3272 logs.go:138] Found kubelet problem: Sep 23 12:01:25 old-k8s-version-656000 kubelet[1893]: E0923 12:01:25.869111    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	I0923 12:01:32.142547    3272 logs.go:123] Gathering logs for describe nodes ...
	I0923 12:01:32.142547    3272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 12:01:32.369242    3272 logs.go:123] Gathering logs for container status ...
	I0923 12:01:32.369242    3272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 12:01:32.466277    3272 logs.go:123] Gathering logs for kube-scheduler [9dfaf3fd956f] ...
	I0923 12:01:32.466334    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfaf3fd956f"
	I0923 12:01:32.512671    3272 logs.go:123] Gathering logs for storage-provisioner [434662eca49c] ...
	I0923 12:01:32.512671    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 434662eca49c"
	I0923 12:01:32.558452    3272 logs.go:123] Gathering logs for kube-scheduler [8dfd1e04a342] ...
	I0923 12:01:32.558452    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfd1e04a342"
	I0923 12:01:32.601592    3272 logs.go:123] Gathering logs for kube-controller-manager [8fb2248f24f8] ...
	I0923 12:01:32.601734    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fb2248f24f8"
	I0923 12:01:32.668121    3272 logs.go:123] Gathering logs for kube-controller-manager [f1a54f9ee3db] ...
	I0923 12:01:32.668121    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1a54f9ee3db"
	I0923 12:01:32.725860    3272 logs.go:123] Gathering logs for storage-provisioner [f26f0d83255a] ...
	I0923 12:01:32.725860    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26f0d83255a"
	I0923 12:01:32.774337    3272 logs.go:123] Gathering logs for Docker ...
	I0923 12:01:32.774337    3272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 12:01:32.820269    3272 logs.go:123] Gathering logs for dmesg ...
	I0923 12:01:32.820269    3272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 12:01:32.846438    3272 logs.go:123] Gathering logs for coredns [6a0ae0205b95] ...
	I0923 12:01:32.846507    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a0ae0205b95"
	I0923 12:01:32.890288    3272 logs.go:123] Gathering logs for kube-proxy [2e4fea7d2041] ...
	I0923 12:01:32.890288    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e4fea7d2041"
	I0923 12:01:32.938699    3272 logs.go:123] Gathering logs for kube-apiserver [cf370ce76d59] ...
	I0923 12:01:32.938699    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf370ce76d59"
	I0923 12:01:32.999494    3272 logs.go:123] Gathering logs for coredns [cd2751a49ca4] ...
	I0923 12:01:32.999494    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd2751a49ca4"
	I0923 12:01:33.053859    3272 out.go:358] Setting ErrFile to fd 1728...
	I0923 12:01:33.053940    3272 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 12:01:33.054052    3272 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0923 12:01:33.054220    3272 out.go:270]   Sep 23 12:01:01 old-k8s-version-656000 kubelet[1893]: E0923 12:01:01.876882    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 23 12:01:01 old-k8s-version-656000 kubelet[1893]: E0923 12:01:01.876882    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:33.054304    3272 out.go:270]   Sep 23 12:01:10 old-k8s-version-656000 kubelet[1893]: E0923 12:01:10.868254    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	  Sep 23 12:01:10 old-k8s-version-656000 kubelet[1893]: E0923 12:01:10.868254    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:33.054348    3272 out.go:270]   Sep 23 12:01:12 old-k8s-version-656000 kubelet[1893]: E0923 12:01:12.868011    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 23 12:01:12 old-k8s-version-656000 kubelet[1893]: E0923 12:01:12.868011    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:33.054348    3272 out.go:270]   Sep 23 12:01:23 old-k8s-version-656000 kubelet[1893]: E0923 12:01:23.867467    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 23 12:01:23 old-k8s-version-656000 kubelet[1893]: E0923 12:01:23.867467    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:33.054348    3272 out.go:270]   Sep 23 12:01:25 old-k8s-version-656000 kubelet[1893]: E0923 12:01:25.869111    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	  Sep 23 12:01:25 old-k8s-version-656000 kubelet[1893]: E0923 12:01:25.869111    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	I0923 12:01:33.054469    3272 out.go:358] Setting ErrFile to fd 1728...
	I0923 12:01:33.054469    3272 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:01:43.073650    3272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:01:43.100453    3272 api_server.go:72] duration metric: took 5m53.5602541s to wait for apiserver process to appear ...
	I0923 12:01:43.101474    3272 api_server.go:88] waiting for apiserver healthz status ...
	I0923 12:01:43.113479    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 12:01:43.172147    3272 logs.go:276] 2 containers: [cf370ce76d59 710a0ba13429]
	I0923 12:01:43.181236    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 12:01:43.239975    3272 logs.go:276] 2 containers: [649f5bcaa4f1 5fde0ebfccf6]
	I0923 12:01:43.251563    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 12:01:43.305568    3272 logs.go:276] 2 containers: [6a0ae0205b95 cd2751a49ca4]
	I0923 12:01:43.316088    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 12:01:43.372822    3272 logs.go:276] 2 containers: [8dfd1e04a342 9dfaf3fd956f]
	I0923 12:01:43.385106    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 12:01:43.436989    3272 logs.go:276] 2 containers: [86d33ed9ed44 2e4fea7d2041]
	I0923 12:01:43.449318    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 12:01:43.502053    3272 logs.go:276] 2 containers: [8fb2248f24f8 f1a54f9ee3db]
	I0923 12:01:43.515033    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 12:01:43.564197    3272 logs.go:276] 0 containers: []
	W0923 12:01:43.564247    3272 logs.go:278] No container was found matching "kindnet"
	I0923 12:01:43.574435    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0923 12:01:43.624271    3272 logs.go:276] 1 containers: [2b7dc21ea030]
	I0923 12:01:43.636549    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 12:01:43.678428    3272 logs.go:276] 2 containers: [f26f0d83255a 434662eca49c]
	I0923 12:01:43.678428    3272 logs.go:123] Gathering logs for kube-proxy [2e4fea7d2041] ...
	I0923 12:01:43.678428    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e4fea7d2041"
	I0923 12:01:43.731007    3272 logs.go:123] Gathering logs for kube-controller-manager [8fb2248f24f8] ...
	I0923 12:01:43.731060    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fb2248f24f8"
	I0923 12:01:43.802681    3272 logs.go:123] Gathering logs for kube-controller-manager [f1a54f9ee3db] ...
	I0923 12:01:43.802681    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1a54f9ee3db"
	I0923 12:01:43.871273    3272 logs.go:123] Gathering logs for kube-apiserver [cf370ce76d59] ...
	I0923 12:01:43.871273    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf370ce76d59"
	I0923 12:01:43.952941    3272 logs.go:123] Gathering logs for etcd [649f5bcaa4f1] ...
	I0923 12:01:43.952941    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5bcaa4f1"
	I0923 12:01:44.014140    3272 logs.go:123] Gathering logs for coredns [6a0ae0205b95] ...
	I0923 12:01:44.014140    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a0ae0205b95"
	I0923 12:01:44.065195    3272 logs.go:123] Gathering logs for kube-scheduler [9dfaf3fd956f] ...
	I0923 12:01:44.065261    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfaf3fd956f"
	I0923 12:01:44.127720    3272 logs.go:123] Gathering logs for storage-provisioner [434662eca49c] ...
	I0923 12:01:44.127720    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 434662eca49c"
	I0923 12:01:44.175006    3272 logs.go:123] Gathering logs for kubelet ...
	I0923 12:01:44.175067    3272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0923 12:01:44.259142    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:09 old-k8s-version-656000 kubelet[1893]: E0923 11:56:09.975569    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0923 12:01:44.261214    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:12 old-k8s-version-656000 kubelet[1893]: E0923 11:56:12.641934    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.262652    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:13 old-k8s-version-656000 kubelet[1893]: E0923 11:56:13.742174    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.266764    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:29 old-k8s-version-656000 kubelet[1893]: E0923 11:56:29.020726    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0923 12:01:44.268445    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:33 old-k8s-version-656000 kubelet[1893]: E0923 11:56:33.177742    1893 pod_workers.go:191] Error syncing pod 5ea83d08-b331-4bfa-995f-9856437c78ec ("storage-provisioner_kube-system(5ea83d08-b331-4bfa-995f-9856437c78ec)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5ea83d08-b331-4bfa-995f-9856437c78ec)"
	W0923 12:01:44.269054    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:43 old-k8s-version-656000 kubelet[1893]: E0923 11:56:43.963428    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.270875    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:56 old-k8s-version-656000 kubelet[1893]: E0923 11:56:56.122782    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0923 12:01:44.271795    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:56 old-k8s-version-656000 kubelet[1893]: E0923 11:56:56.777992    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.273534    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:58 old-k8s-version-656000 kubelet[1893]: E0923 11:56:58.985647    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0923 12:01:44.276272    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:10 old-k8s-version-656000 kubelet[1893]: E0923 11:57:10.376499    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0923 12:01:44.276307    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:12 old-k8s-version-656000 kubelet[1893]: E0923 11:57:12.902413    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.276307    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:20 old-k8s-version-656000 kubelet[1893]: E0923 11:57:20.903057    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.276843    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:27 old-k8s-version-656000 kubelet[1893]: E0923 11:57:27.903462    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.279476    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:32 old-k8s-version-656000 kubelet[1893]: E0923 11:57:32.470797    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0923 12:01:44.282496    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:40 old-k8s-version-656000 kubelet[1893]: E0923 11:57:40.005343    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0923 12:01:44.282593    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:46 old-k8s-version-656000 kubelet[1893]: E0923 11:57:46.898299    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.282593    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:50 old-k8s-version-656000 kubelet[1893]: E0923 11:57:50.897643    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.283350    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:57 old-k8s-version-656000 kubelet[1893]: E0923 11:57:57.897696    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.283350    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:02 old-k8s-version-656000 kubelet[1893]: E0923 11:58:02.898254    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.283983    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:09 old-k8s-version-656000 kubelet[1893]: E0923 11:58:09.893616    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.283983    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:13 old-k8s-version-656000 kubelet[1893]: E0923 11:58:13.893937    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.286557    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:21 old-k8s-version-656000 kubelet[1893]: E0923 11:58:21.372831    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0923 12:01:44.286557    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:26 old-k8s-version-656000 kubelet[1893]: E0923 11:58:26.893634    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.286557    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:33 old-k8s-version-656000 kubelet[1893]: E0923 11:58:33.905339    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.286557    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:38 old-k8s-version-656000 kubelet[1893]: E0923 11:58:38.889353    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.287550    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:47 old-k8s-version-656000 kubelet[1893]: E0923 11:58:47.889599    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.287550    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:53 old-k8s-version-656000 kubelet[1893]: E0923 11:58:53.888891    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.287550    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:59 old-k8s-version-656000 kubelet[1893]: E0923 11:58:59.889484    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.289546    3272 logs.go:138] Found kubelet problem: Sep 23 11:59:05 old-k8s-version-656000 kubelet[1893]: E0923 11:59:05.927866    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0923 12:01:44.289546    3272 logs.go:138] Found kubelet problem: Sep 23 11:59:13 old-k8s-version-656000 kubelet[1893]: E0923 11:59:13.886312    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.289546    3272 logs.go:138] Found kubelet problem: Sep 23 11:59:20 old-k8s-version-656000 kubelet[1893]: E0923 11:59:20.885891    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.290548    3272 logs.go:138] Found kubelet problem: Sep 23 11:59:28 old-k8s-version-656000 kubelet[1893]: E0923 11:59:28.884672    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.290548    3272 logs.go:138] Found kubelet problem: Sep 23 11:59:35 old-k8s-version-656000 kubelet[1893]: E0923 11:59:35.880315    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.293558    3272 logs.go:138] Found kubelet problem: Sep 23 11:59:42 old-k8s-version-656000 kubelet[1893]: E0923 11:59:42.382968    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0923 12:01:44.293558    3272 logs.go:138] Found kubelet problem: Sep 23 11:59:50 old-k8s-version-656000 kubelet[1893]: E0923 11:59:50.879975    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.293558    3272 logs.go:138] Found kubelet problem: Sep 23 11:59:54 old-k8s-version-656000 kubelet[1893]: E0923 11:59:54.879656    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.294549    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:01 old-k8s-version-656000 kubelet[1893]: E0923 12:00:01.880164    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.294549    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:07 old-k8s-version-656000 kubelet[1893]: E0923 12:00:07.877117    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.294549    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:13 old-k8s-version-656000 kubelet[1893]: E0923 12:00:13.875328    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.294549    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:20 old-k8s-version-656000 kubelet[1893]: E0923 12:00:20.875342    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.294549    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:27 old-k8s-version-656000 kubelet[1893]: E0923 12:00:27.875407    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.295549    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:31 old-k8s-version-656000 kubelet[1893]: E0923 12:00:31.875521    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.295549    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:39 old-k8s-version-656000 kubelet[1893]: E0923 12:00:39.871312    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.295549    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:44 old-k8s-version-656000 kubelet[1893]: E0923 12:00:44.893061    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.296557    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:50 old-k8s-version-656000 kubelet[1893]: E0923 12:00:50.872230    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.296557    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:58 old-k8s-version-656000 kubelet[1893]: E0923 12:00:58.872565    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.296557    3272 logs.go:138] Found kubelet problem: Sep 23 12:01:01 old-k8s-version-656000 kubelet[1893]: E0923 12:01:01.876882    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.296557    3272 logs.go:138] Found kubelet problem: Sep 23 12:01:10 old-k8s-version-656000 kubelet[1893]: E0923 12:01:10.868254    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.296557    3272 logs.go:138] Found kubelet problem: Sep 23 12:01:12 old-k8s-version-656000 kubelet[1893]: E0923 12:01:12.868011    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.297549    3272 logs.go:138] Found kubelet problem: Sep 23 12:01:23 old-k8s-version-656000 kubelet[1893]: E0923 12:01:23.867467    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.297549    3272 logs.go:138] Found kubelet problem: Sep 23 12:01:25 old-k8s-version-656000 kubelet[1893]: E0923 12:01:25.869111    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.297549    3272 logs.go:138] Found kubelet problem: Sep 23 12:01:36 old-k8s-version-656000 kubelet[1893]: E0923 12:01:36.868788    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.297549    3272 logs.go:138] Found kubelet problem: Sep 23 12:01:37 old-k8s-version-656000 kubelet[1893]: E0923 12:01:37.867963    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0923 12:01:44.297549    3272 logs.go:123] Gathering logs for describe nodes ...
	I0923 12:01:44.297549    3272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 12:01:44.506001    3272 logs.go:123] Gathering logs for etcd [5fde0ebfccf6] ...
	I0923 12:01:44.506111    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fde0ebfccf6"
	I0923 12:01:44.575903    3272 logs.go:123] Gathering logs for coredns [cd2751a49ca4] ...
	I0923 12:01:44.575903    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd2751a49ca4"
	I0923 12:01:44.631527    3272 logs.go:123] Gathering logs for kube-scheduler [8dfd1e04a342] ...
	I0923 12:01:44.631575    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfd1e04a342"
	I0923 12:01:44.690320    3272 logs.go:123] Gathering logs for kube-proxy [86d33ed9ed44] ...
	I0923 12:01:44.690386    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86d33ed9ed44"
	I0923 12:01:44.750284    3272 logs.go:123] Gathering logs for kubernetes-dashboard [2b7dc21ea030] ...
	I0923 12:01:44.750357    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b7dc21ea030"
	I0923 12:01:44.801681    3272 logs.go:123] Gathering logs for dmesg ...
	I0923 12:01:44.801681    3272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 12:01:44.829673    3272 logs.go:123] Gathering logs for kube-apiserver [710a0ba13429] ...
	I0923 12:01:44.829673    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 710a0ba13429"
	I0923 12:01:44.938481    3272 logs.go:123] Gathering logs for storage-provisioner [f26f0d83255a] ...
	I0923 12:01:44.938481    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26f0d83255a"
	I0923 12:01:44.988050    3272 logs.go:123] Gathering logs for Docker ...
	I0923 12:01:44.988100    3272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 12:01:45.046323    3272 logs.go:123] Gathering logs for container status ...
	I0923 12:01:45.046323    3272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 12:01:45.151567    3272 out.go:358] Setting ErrFile to fd 1728...
	I0923 12:01:45.152098    3272 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 12:01:45.152220    3272 out.go:270] X Problems detected in kubelet:
	W0923 12:01:45.152247    3272 out.go:270]   Sep 23 12:01:12 old-k8s-version-656000 kubelet[1893]: E0923 12:01:12.868011    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:45.152247    3272 out.go:270]   Sep 23 12:01:23 old-k8s-version-656000 kubelet[1893]: E0923 12:01:23.867467    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:45.152247    3272 out.go:270]   Sep 23 12:01:25 old-k8s-version-656000 kubelet[1893]: E0923 12:01:25.869111    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:45.152247    3272 out.go:270]   Sep 23 12:01:36 old-k8s-version-656000 kubelet[1893]: E0923 12:01:36.868788    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:45.152247    3272 out.go:270]   Sep 23 12:01:37 old-k8s-version-656000 kubelet[1893]: E0923 12:01:37.867963    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0923 12:01:45.152247    3272 out.go:358] Setting ErrFile to fd 1728...
	I0923 12:01:45.152247    3272 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:01:55.154399    3272 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:63423/healthz ...
	I0923 12:01:55.172691    3272 api_server.go:279] https://127.0.0.1:63423/healthz returned 200:
	ok
	I0923 12:01:55.176144    3272 out.go:201] 
	W0923 12:01:55.178225    3272 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0923 12:01:55.178225    3272 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0923 12:01:55.178225    3272 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0923 12:01:55.178225    3272 out.go:270] * 
	W0923 12:01:55.179555    3272 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 12:01:55.182609    3272 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p old-k8s-version-656000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-656000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-656000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5040522385d20d1972f48fb9366f0b3c5d6d12a84dd66a2e68ef9df185af21ad",
	        "Created": "2024-09-23T11:51:31.662513054Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 346907,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-23T11:55:12.834641674Z",
	            "FinishedAt": "2024-09-23T11:55:09.568834318Z"
	        },
	        "Image": "sha256:d94335c0cd164ddebb3c5158e317bcf6d2e08dc08f448d25251f425acb842829",
	        "ResolvConfPath": "/var/lib/docker/containers/5040522385d20d1972f48fb9366f0b3c5d6d12a84dd66a2e68ef9df185af21ad/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5040522385d20d1972f48fb9366f0b3c5d6d12a84dd66a2e68ef9df185af21ad/hostname",
	        "HostsPath": "/var/lib/docker/containers/5040522385d20d1972f48fb9366f0b3c5d6d12a84dd66a2e68ef9df185af21ad/hosts",
	        "LogPath": "/var/lib/docker/containers/5040522385d20d1972f48fb9366f0b3c5d6d12a84dd66a2e68ef9df185af21ad/5040522385d20d1972f48fb9366f0b3c5d6d12a84dd66a2e68ef9df185af21ad-json.log",
	        "Name": "/old-k8s-version-656000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-656000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-656000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3fb522fe0c48ce6fc88fc36c18fb8f625e452f3b33a80163bd3a435374bff3f1-init/diff:/var/lib/docker/overlay2/45a1d176e43ae6a4b4b413b83d6ac02867e558bd9182f31de6a362b3112ed40d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3fb522fe0c48ce6fc88fc36c18fb8f625e452f3b33a80163bd3a435374bff3f1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3fb522fe0c48ce6fc88fc36c18fb8f625e452f3b33a80163bd3a435374bff3f1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3fb522fe0c48ce6fc88fc36c18fb8f625e452f3b33a80163bd3a435374bff3f1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-656000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-656000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-656000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-656000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-656000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "76dc0b0b5034099d4a536a928a7e5e7fbc24d4fcc9099fa8151cab273c83dbcb",
	            "SandboxKey": "/var/run/docker/netns/76dc0b0b5034",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63419"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63420"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63421"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63422"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63423"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-656000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "2349c885d28e4faada6b459f0adfcca56c140ff38ce8d683db94b85119f3867e",
	                    "EndpointID": "8f9ad930ca742ad55240b6e4fcea1350477ebdfe872adcc64c81bcfddf48c6a5",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-656000",
	                        "5040522385d2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-656000 -n old-k8s-version-656000
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-656000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p old-k8s-version-656000 logs -n 25: (2.5158797s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	| stop    | -p                                                     | default-k8s-diff-port-581000 | minikube4\jenkins | v1.34.0 | 23 Sep 24 11:55 UTC | 23 Sep 24 11:55 UTC |
	|         | default-k8s-diff-port-581000                           |                              |                   |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |                   |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-826900             | no-preload-826900            | minikube4\jenkins | v1.34.0 | 23 Sep 24 11:55 UTC | 23 Sep 24 11:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |                   |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |                   |         |                     |                     |
	| stop    | -p no-preload-826900                                   | no-preload-826900            | minikube4\jenkins | v1.34.0 | 23 Sep 24 11:55 UTC | 23 Sep 24 11:56 UTC |
	|         | --alsologtostderr -v=3                                 |                              |                   |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-618200                 | embed-certs-618200           | minikube4\jenkins | v1.34.0 | 23 Sep 24 11:55 UTC | 23 Sep 24 11:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |                   |         |                     |                     |
	| start   | -p embed-certs-618200                                  | embed-certs-618200           | minikube4\jenkins | v1.34.0 | 23 Sep 24 11:55 UTC | 23 Sep 24 12:00 UTC |
	|         | --memory=2200                                          |                              |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |                   |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |                   |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |                   |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-581000       | default-k8s-diff-port-581000 | minikube4\jenkins | v1.34.0 | 23 Sep 24 11:55 UTC | 23 Sep 24 11:56 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |                   |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-581000 | minikube4\jenkins | v1.34.0 | 23 Sep 24 11:56 UTC | 23 Sep 24 12:00 UTC |
	|         | default-k8s-diff-port-581000                           |                              |                   |         |                     |                     |
	|         | --memory=2200                                          |                              |                   |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |                   |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |                   |         |                     |                     |
	|         | --driver=docker                                        |                              |                   |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |                   |         |                     |                     |
	| addons  | enable dashboard -p no-preload-826900                  | no-preload-826900            | minikube4\jenkins | v1.34.0 | 23 Sep 24 11:56 UTC | 23 Sep 24 11:56 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |                   |         |                     |                     |
	| start   | -p no-preload-826900                                   | no-preload-826900            | minikube4\jenkins | v1.34.0 | 23 Sep 24 11:56 UTC | 23 Sep 24 12:00 UTC |
	|         | --memory=2200                                          |                              |                   |         |                     |                     |
	|         | --alsologtostderr                                      |                              |                   |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |                   |         |                     |                     |
	|         | --driver=docker                                        |                              |                   |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |                   |         |                     |                     |
	| image   | embed-certs-618200 image list                          | embed-certs-618200           | minikube4\jenkins | v1.34.0 | 23 Sep 24 12:00 UTC | 23 Sep 24 12:00 UTC |
	|         | --format=json                                          |                              |                   |         |                     |                     |
	| pause   | -p embed-certs-618200                                  | embed-certs-618200           | minikube4\jenkins | v1.34.0 | 23 Sep 24 12:00 UTC | 23 Sep 24 12:01 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| image   | default-k8s-diff-port-581000                           | default-k8s-diff-port-581000 | minikube4\jenkins | v1.34.0 | 23 Sep 24 12:01 UTC | 23 Sep 24 12:01 UTC |
	|         | image list --format=json                               |                              |                   |         |                     |                     |
	| unpause | -p embed-certs-618200                                  | embed-certs-618200           | minikube4\jenkins | v1.34.0 | 23 Sep 24 12:01 UTC | 23 Sep 24 12:01 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-581000 | minikube4\jenkins | v1.34.0 | 23 Sep 24 12:01 UTC | 23 Sep 24 12:01 UTC |
	|         | default-k8s-diff-port-581000                           |                              |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-581000 | minikube4\jenkins | v1.34.0 | 23 Sep 24 12:01 UTC | 23 Sep 24 12:01 UTC |
	|         | default-k8s-diff-port-581000                           |                              |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| delete  | -p embed-certs-618200                                  | embed-certs-618200           | minikube4\jenkins | v1.34.0 | 23 Sep 24 12:01 UTC | 23 Sep 24 12:01 UTC |
	| delete  | -p                                                     | default-k8s-diff-port-581000 | minikube4\jenkins | v1.34.0 | 23 Sep 24 12:01 UTC | 23 Sep 24 12:01 UTC |
	|         | default-k8s-diff-port-581000                           |                              |                   |         |                     |                     |
	| image   | no-preload-826900 image list                           | no-preload-826900            | minikube4\jenkins | v1.34.0 | 23 Sep 24 12:01 UTC | 23 Sep 24 12:01 UTC |
	|         | --format=json                                          |                              |                   |         |                     |                     |
	| delete  | -p embed-certs-618200                                  | embed-certs-618200           | minikube4\jenkins | v1.34.0 | 23 Sep 24 12:01 UTC | 23 Sep 24 12:01 UTC |
	| pause   | -p no-preload-826900                                   | no-preload-826900            | minikube4\jenkins | v1.34.0 | 23 Sep 24 12:01 UTC | 23 Sep 24 12:01 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| start   | -p newest-cni-895600 --memory=2200 --alsologtostderr   | newest-cni-895600            | minikube4\jenkins | v1.34.0 | 23 Sep 24 12:01 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |                   |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |                   |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |                   |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |                   |         |                     |                     |
	|         | --driver=docker --kubernetes-version=v1.31.1           |                              |                   |         |                     |                     |
	| unpause | -p no-preload-826900                                   | no-preload-826900            | minikube4\jenkins | v1.34.0 | 23 Sep 24 12:01 UTC | 23 Sep 24 12:01 UTC |
	|         | --alsologtostderr -v=1                                 |                              |                   |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-581000 | minikube4\jenkins | v1.34.0 | 23 Sep 24 12:01 UTC | 23 Sep 24 12:01 UTC |
	|         | default-k8s-diff-port-581000                           |                              |                   |         |                     |                     |
	| delete  | -p no-preload-826900                                   | no-preload-826900            | minikube4\jenkins | v1.34.0 | 23 Sep 24 12:01 UTC | 23 Sep 24 12:01 UTC |
	| delete  | -p no-preload-826900                                   | no-preload-826900            | minikube4\jenkins | v1.34.0 | 23 Sep 24 12:01 UTC | 23 Sep 24 12:01 UTC |
	|---------|--------------------------------------------------------|------------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 12:01:13
	Running on machine: minikube4
	Binary: Built with gc go1.23.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 12:01:13.209137    5872 out.go:345] Setting OutFile to fd 2000 ...
	I0923 12:01:13.293492    5872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:01:13.293492    5872 out.go:358] Setting ErrFile to fd 1988...
	I0923 12:01:13.293492    5872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:01:13.318173    5872 out.go:352] Setting JSON to false
	I0923 12:01:13.321138    5872 start.go:129] hostinfo: {"hostname":"minikube4","uptime":53436,"bootTime":1727039437,"procs":199,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0923 12:01:13.321138    5872 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 12:01:13.329148    5872 out.go:177] * [newest-cni-895600] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0923 12:01:13.334146    5872 notify.go:220] Checking for updates...
	I0923 12:01:13.336139    5872 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0923 12:01:13.340355    5872 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 12:01:13.344915    5872 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0923 12:01:13.350327    5872 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 12:01:13.356540    5872 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 12:01:13.361668    5872 config.go:182] Loaded profile config "default-k8s-diff-port-581000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:01:13.362272    5872 config.go:182] Loaded profile config "no-preload-826900": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:01:13.363127    5872 config.go:182] Loaded profile config "old-k8s-version-656000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0923 12:01:13.363434    5872 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 12:01:13.577290    5872 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.1 (166053)
	I0923 12:01:13.586295    5872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 12:01:13.933846    5872 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:82 OomKillDisable:true NGoroutines:93 SystemTime:2024-09-23 12:01:13.90242288 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0923 12:01:13.940853    5872 out.go:177] * Using the docker driver based on user configuration
	I0923 12:01:13.945842    5872 start.go:297] selected driver: docker
	I0923 12:01:13.945842    5872 start.go:901] validating driver "docker" against <nil>
	I0923 12:01:13.945842    5872 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 12:01:14.103621    5872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 12:01:14.475078    5872 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:88 OomKillDisable:true NGoroutines:89 SystemTime:2024-09-23 12:01:14.450021403 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0923 12:01:14.475078    5872 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0923 12:01:14.475078    5872 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0923 12:01:14.477079    5872 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0923 12:01:14.482079    5872 out.go:177] * Using Docker Desktop driver with root privileges
	I0923 12:01:14.486072    5872 cni.go:84] Creating CNI manager for ""
	I0923 12:01:14.486072    5872 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 12:01:14.486072    5872 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 12:01:14.486072    5872 start.go:340] cluster config:
	{Name:newest-cni-895600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-895600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:01:14.494085    5872 out.go:177] * Starting "newest-cni-895600" primary control-plane node in "newest-cni-895600" cluster
	I0923 12:01:14.497074    5872 cache.go:121] Beginning downloading kic base image for docker with docker
	I0923 12:01:14.503065    5872 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0923 12:01:14.507068    5872 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 12:01:14.507068    5872 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 12:01:14.507068    5872 preload.go:146] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0923 12:01:14.507068    5872 cache.go:56] Caching tarball of preloaded images
	I0923 12:01:14.508072    5872 preload.go:172] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 12:01:14.508072    5872 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 12:01:14.508072    5872 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-895600\config.json ...
	I0923 12:01:14.509077    5872 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-895600\config.json: {Name:mk66f00d6bed5a06d7cd9902e848ead43e977da9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:01:14.618476    5872 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon, skipping pull
	I0923 12:01:14.618476    5872 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in daemon, skipping load
	I0923 12:01:14.618476    5872 cache.go:194] Successfully downloaded all kic artifacts
	I0923 12:01:14.618476    5872 start.go:360] acquireMachinesLock for newest-cni-895600: {Name:mk7a606fc3baef25f6266a9c02c51756b6009298 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 12:01:14.618476    5872 start.go:364] duration metric: took 0s to acquireMachinesLock for "newest-cni-895600"
	I0923 12:01:14.618476    5872 start.go:93] Provisioning new machine with config: &{Name:newest-cni-895600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-895600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 12:01:14.619475    5872 start.go:125] createHost starting for "" (driver="docker")
	I0923 12:01:12.199129    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:01:14.221088    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:01:14.627479    5872 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0923 12:01:14.627479    5872 start.go:159] libmachine.API.Create for "newest-cni-895600" (driver="docker")
	I0923 12:01:14.627479    5872 client.go:168] LocalClient.Create starting
	I0923 12:01:14.628474    5872 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I0923 12:01:14.628474    5872 main.go:141] libmachine: Decoding PEM data...
	I0923 12:01:14.628474    5872 main.go:141] libmachine: Parsing certificate...
	I0923 12:01:14.628474    5872 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I0923 12:01:14.628474    5872 main.go:141] libmachine: Decoding PEM data...
	I0923 12:01:14.628474    5872 main.go:141] libmachine: Parsing certificate...
	I0923 12:01:14.645486    5872 cli_runner.go:164] Run: docker network inspect newest-cni-895600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0923 12:01:14.722510    5872 cli_runner.go:211] docker network inspect newest-cni-895600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0923 12:01:14.733476    5872 network_create.go:284] running [docker network inspect newest-cni-895600] to gather additional debugging logs...
	I0923 12:01:14.733476    5872 cli_runner.go:164] Run: docker network inspect newest-cni-895600
	W0923 12:01:14.814548    5872 cli_runner.go:211] docker network inspect newest-cni-895600 returned with exit code 1
	I0923 12:01:14.814548    5872 network_create.go:287] error running [docker network inspect newest-cni-895600]: docker network inspect newest-cni-895600: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-895600 not found
	I0923 12:01:14.814548    5872 network_create.go:289] output of [docker network inspect newest-cni-895600]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-895600 not found
	
	** /stderr **
	I0923 12:01:14.825538    5872 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 12:01:14.929057    5872 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0923 12:01:14.959551    5872 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0923 12:01:14.981535    5872 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016f0990}
	I0923 12:01:14.981535    5872 network_create.go:124] attempt to create docker network newest-cni-895600 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0923 12:01:14.989598    5872 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-895600 newest-cni-895600
	W0923 12:01:15.055546    5872 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-895600 newest-cni-895600 returned with exit code 1
	W0923 12:01:15.055546    5872 network_create.go:149] failed to create docker network newest-cni-895600 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-895600 newest-cni-895600: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W0923 12:01:15.055546    5872 network_create.go:116] failed to create docker network newest-cni-895600 192.168.67.0/24, will retry: subnet is taken
	I0923 12:01:15.087585    5872 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0923 12:01:15.112562    5872 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015cdb90}
	I0923 12:01:15.112562    5872 network_create.go:124] attempt to create docker network newest-cni-895600 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0923 12:01:15.125585    5872 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-895600 newest-cni-895600
	I0923 12:01:15.656995    5872 network_create.go:108] docker network newest-cni-895600 192.168.76.0/24 created
	I0923 12:01:15.656995    5872 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-895600" container
	I0923 12:01:15.672889    5872 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0923 12:01:15.748929    5872 cli_runner.go:164] Run: docker volume create newest-cni-895600 --label name.minikube.sigs.k8s.io=newest-cni-895600 --label created_by.minikube.sigs.k8s.io=true
	I0923 12:01:15.827494    5872 oci.go:103] Successfully created a docker volume newest-cni-895600
	I0923 12:01:15.837602    5872 cli_runner.go:164] Run: docker run --rm --name newest-cni-895600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-895600 --entrypoint /usr/bin/test -v newest-cni-895600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib
	I0923 12:01:18.270372    5872 cli_runner.go:217] Completed: docker run --rm --name newest-cni-895600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-895600 --entrypoint /usr/bin/test -v newest-cni-895600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib: (2.4326254s)
	I0923 12:01:18.270372    5872 oci.go:107] Successfully prepared a docker volume newest-cni-895600
	I0923 12:01:18.270372    5872 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 12:01:18.270372    5872 kic.go:194] Starting extracting preloaded images to volume ...
	I0923 12:01:18.286387    5872 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-895600:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir
	I0923 12:01:16.702160    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:01:19.193062    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:01:21.198106    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:01:24.144156    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:01:26.205538    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:01:28.702789    3272 pod_ready.go:103] pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace has status "Ready":"False"
	I0923 12:01:31.180338    3272 pod_ready.go:82] duration metric: took 4m0.0002592s for pod "metrics-server-9975d5f86-5pvv2" in "kube-system" namespace to be "Ready" ...
	E0923 12:01:31.180538    3272 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0923 12:01:31.180538    3272 pod_ready.go:39] duration metric: took 5m26.4904396s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 12:01:31.180538    3272 api_server.go:52] waiting for apiserver process to appear ...
	I0923 12:01:31.189652    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 12:01:31.237094    3272 logs.go:276] 2 containers: [cf370ce76d59 710a0ba13429]
	I0923 12:01:31.246877    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 12:01:31.292057    3272 logs.go:276] 2 containers: [649f5bcaa4f1 5fde0ebfccf6]
	I0923 12:01:31.303333    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 12:01:31.353774    3272 logs.go:276] 2 containers: [6a0ae0205b95 cd2751a49ca4]
	I0923 12:01:31.363056    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 12:01:31.403714    3272 logs.go:276] 2 containers: [8dfd1e04a342 9dfaf3fd956f]
	I0923 12:01:31.418097    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 12:01:31.470155    3272 logs.go:276] 2 containers: [86d33ed9ed44 2e4fea7d2041]
	I0923 12:01:31.480778    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 12:01:31.523776    3272 logs.go:276] 2 containers: [8fb2248f24f8 f1a54f9ee3db]
	I0923 12:01:31.532784    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 12:01:31.584149    3272 logs.go:276] 0 containers: []
	W0923 12:01:31.584149    3272 logs.go:278] No container was found matching "kindnet"
	I0923 12:01:31.595346    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0923 12:01:31.642314    3272 logs.go:276] 1 containers: [2b7dc21ea030]
	I0923 12:01:31.655758    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 12:01:31.693308    3272 logs.go:276] 2 containers: [f26f0d83255a 434662eca49c]
	I0923 12:01:31.693308    3272 logs.go:123] Gathering logs for kube-apiserver [710a0ba13429] ...
	I0923 12:01:31.693308    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 710a0ba13429"
	I0923 12:01:31.801618    3272 logs.go:123] Gathering logs for etcd [649f5bcaa4f1] ...
	I0923 12:01:31.801618    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5bcaa4f1"
	I0923 12:01:31.865922    3272 logs.go:123] Gathering logs for etcd [5fde0ebfccf6] ...
	I0923 12:01:31.865922    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fde0ebfccf6"
	I0923 12:01:31.924795    3272 logs.go:123] Gathering logs for kube-proxy [86d33ed9ed44] ...
	I0923 12:01:31.924795    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86d33ed9ed44"
	I0923 12:01:31.972288    3272 logs.go:123] Gathering logs for kubernetes-dashboard [2b7dc21ea030] ...
	I0923 12:01:31.972384    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b7dc21ea030"
	I0923 12:01:32.021434    3272 logs.go:123] Gathering logs for kubelet ...
	I0923 12:01:32.021434    3272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0923 12:01:32.104879    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:09 old-k8s-version-656000 kubelet[1893]: E0923 11:56:09.975569    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0923 12:01:32.106874    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:12 old-k8s-version-656000 kubelet[1893]: E0923 11:56:12.641934    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.107411    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:13 old-k8s-version-656000 kubelet[1893]: E0923 11:56:13.742174    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.111458    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:29 old-k8s-version-656000 kubelet[1893]: E0923 11:56:29.020726    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0923 12:01:32.113062    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:33 old-k8s-version-656000 kubelet[1893]: E0923 11:56:33.177742    1893 pod_workers.go:191] Error syncing pod 5ea83d08-b331-4bfa-995f-9856437c78ec ("storage-provisioner_kube-system(5ea83d08-b331-4bfa-995f-9856437c78ec)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5ea83d08-b331-4bfa-995f-9856437c78ec)"
	W0923 12:01:32.113465    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:43 old-k8s-version-656000 kubelet[1893]: E0923 11:56:43.963428    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.115459    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:56 old-k8s-version-656000 kubelet[1893]: E0923 11:56:56.122782    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0923 12:01:32.116026    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:56 old-k8s-version-656000 kubelet[1893]: E0923 11:56:56.777992    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.118208    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:58 old-k8s-version-656000 kubelet[1893]: E0923 11:56:58.985647    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0923 12:01:32.120520    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:10 old-k8s-version-656000 kubelet[1893]: E0923 11:57:10.376499    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0923 12:01:32.120520    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:12 old-k8s-version-656000 kubelet[1893]: E0923 11:57:12.902413    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.120520    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:20 old-k8s-version-656000 kubelet[1893]: E0923 11:57:20.903057    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.121513    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:27 old-k8s-version-656000 kubelet[1893]: E0923 11:57:27.903462    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.123493    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:32 old-k8s-version-656000 kubelet[1893]: E0923 11:57:32.470797    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0923 12:01:32.125506    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:40 old-k8s-version-656000 kubelet[1893]: E0923 11:57:40.005343    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0923 12:01:32.125506    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:46 old-k8s-version-656000 kubelet[1893]: E0923 11:57:46.898299    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.125506    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:50 old-k8s-version-656000 kubelet[1893]: E0923 11:57:50.897643    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.126517    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:57 old-k8s-version-656000 kubelet[1893]: E0923 11:57:57.897696    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.126517    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:02 old-k8s-version-656000 kubelet[1893]: E0923 11:58:02.898254    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.126517    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:09 old-k8s-version-656000 kubelet[1893]: E0923 11:58:09.893616    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.126517    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:13 old-k8s-version-656000 kubelet[1893]: E0923 11:58:13.893937    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.128836    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:21 old-k8s-version-656000 kubelet[1893]: E0923 11:58:21.372831    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0923 12:01:32.128836    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:26 old-k8s-version-656000 kubelet[1893]: E0923 11:58:26.893634    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.129820    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:33 old-k8s-version-656000 kubelet[1893]: E0923 11:58:33.905339    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.129820    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:38 old-k8s-version-656000 kubelet[1893]: E0923 11:58:38.889353    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.129820    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:47 old-k8s-version-656000 kubelet[1893]: E0923 11:58:47.889599    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.129820    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:53 old-k8s-version-656000 kubelet[1893]: E0923 11:58:53.888891    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.129820    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:59 old-k8s-version-656000 kubelet[1893]: E0923 11:58:59.889484    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.133525    3272 logs.go:138] Found kubelet problem: Sep 23 11:59:05 old-k8s-version-656000 kubelet[1893]: E0923 11:59:05.927866    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0923 12:01:32.134048    3272 logs.go:138] Found kubelet problem: Sep 23 11:59:13 old-k8s-version-656000 kubelet[1893]: E0923 11:59:13.886312    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.134119    3272 logs.go:138] Found kubelet problem: Sep 23 11:59:20 old-k8s-version-656000 kubelet[1893]: E0923 11:59:20.885891    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.134119    3272 logs.go:138] Found kubelet problem: Sep 23 11:59:28 old-k8s-version-656000 kubelet[1893]: E0923 11:59:28.884672    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.134696    3272 logs.go:138] Found kubelet problem: Sep 23 11:59:35 old-k8s-version-656000 kubelet[1893]: E0923 11:59:35.880315    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.137088    3272 logs.go:138] Found kubelet problem: Sep 23 11:59:42 old-k8s-version-656000 kubelet[1893]: E0923 11:59:42.382968    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0923 12:01:32.137150    3272 logs.go:138] Found kubelet problem: Sep 23 11:59:50 old-k8s-version-656000 kubelet[1893]: E0923 11:59:50.879975    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.137150    3272 logs.go:138] Found kubelet problem: Sep 23 11:59:54 old-k8s-version-656000 kubelet[1893]: E0923 11:59:54.879656    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.137747    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:01 old-k8s-version-656000 kubelet[1893]: E0923 12:00:01.880164    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.137747    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:07 old-k8s-version-656000 kubelet[1893]: E0923 12:00:07.877117    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.137747    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:13 old-k8s-version-656000 kubelet[1893]: E0923 12:00:13.875328    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.138462    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:20 old-k8s-version-656000 kubelet[1893]: E0923 12:00:20.875342    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.138462    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:27 old-k8s-version-656000 kubelet[1893]: E0923 12:00:27.875407    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.138462    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:31 old-k8s-version-656000 kubelet[1893]: E0923 12:00:31.875521    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.139093    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:39 old-k8s-version-656000 kubelet[1893]: E0923 12:00:39.871312    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.139093    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:44 old-k8s-version-656000 kubelet[1893]: E0923 12:00:44.893061    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.139780    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:50 old-k8s-version-656000 kubelet[1893]: E0923 12:00:50.872230    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.140774    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:58 old-k8s-version-656000 kubelet[1893]: E0923 12:00:58.872565    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.140774    3272 logs.go:138] Found kubelet problem: Sep 23 12:01:01 old-k8s-version-656000 kubelet[1893]: E0923 12:01:01.876882    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.141819    3272 logs.go:138] Found kubelet problem: Sep 23 12:01:10 old-k8s-version-656000 kubelet[1893]: E0923 12:01:10.868254    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.141819    3272 logs.go:138] Found kubelet problem: Sep 23 12:01:12 old-k8s-version-656000 kubelet[1893]: E0923 12:01:12.868011    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.141819    3272 logs.go:138] Found kubelet problem: Sep 23 12:01:23 old-k8s-version-656000 kubelet[1893]: E0923 12:01:23.867467    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:32.142547    3272 logs.go:138] Found kubelet problem: Sep 23 12:01:25 old-k8s-version-656000 kubelet[1893]: E0923 12:01:25.869111    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	I0923 12:01:32.142547    3272 logs.go:123] Gathering logs for describe nodes ...
	I0923 12:01:32.142547    3272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 12:01:32.369242    3272 logs.go:123] Gathering logs for container status ...
	I0923 12:01:32.369242    3272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 12:01:32.466277    3272 logs.go:123] Gathering logs for kube-scheduler [9dfaf3fd956f] ...
	I0923 12:01:32.466334    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfaf3fd956f"
	I0923 12:01:32.512671    3272 logs.go:123] Gathering logs for storage-provisioner [434662eca49c] ...
	I0923 12:01:32.512671    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 434662eca49c"
	I0923 12:01:32.558452    3272 logs.go:123] Gathering logs for kube-scheduler [8dfd1e04a342] ...
	I0923 12:01:32.558452    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfd1e04a342"
	I0923 12:01:32.601592    3272 logs.go:123] Gathering logs for kube-controller-manager [8fb2248f24f8] ...
	I0923 12:01:32.601734    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fb2248f24f8"
	I0923 12:01:32.668121    3272 logs.go:123] Gathering logs for kube-controller-manager [f1a54f9ee3db] ...
	I0923 12:01:32.668121    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1a54f9ee3db"
	I0923 12:01:32.725860    3272 logs.go:123] Gathering logs for storage-provisioner [f26f0d83255a] ...
	I0923 12:01:32.725860    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26f0d83255a"
	I0923 12:01:32.774337    3272 logs.go:123] Gathering logs for Docker ...
	I0923 12:01:32.774337    3272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 12:01:32.820269    3272 logs.go:123] Gathering logs for dmesg ...
	I0923 12:01:32.820269    3272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 12:01:32.846438    3272 logs.go:123] Gathering logs for coredns [6a0ae0205b95] ...
	I0923 12:01:32.846507    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a0ae0205b95"
	I0923 12:01:32.890288    3272 logs.go:123] Gathering logs for kube-proxy [2e4fea7d2041] ...
	I0923 12:01:32.890288    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e4fea7d2041"
	I0923 12:01:32.938699    3272 logs.go:123] Gathering logs for kube-apiserver [cf370ce76d59] ...
	I0923 12:01:32.938699    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf370ce76d59"
	I0923 12:01:32.999494    3272 logs.go:123] Gathering logs for coredns [cd2751a49ca4] ...
	I0923 12:01:32.999494    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd2751a49ca4"
	I0923 12:01:33.053859    3272 out.go:358] Setting ErrFile to fd 1728...
	I0923 12:01:33.053940    3272 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 12:01:33.054052    3272 out.go:270] X Problems detected in kubelet:
	W0923 12:01:33.054220    3272 out.go:270]   Sep 23 12:01:01 old-k8s-version-656000 kubelet[1893]: E0923 12:01:01.876882    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:33.054304    3272 out.go:270]   Sep 23 12:01:10 old-k8s-version-656000 kubelet[1893]: E0923 12:01:10.868254    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:33.054348    3272 out.go:270]   Sep 23 12:01:12 old-k8s-version-656000 kubelet[1893]: E0923 12:01:12.868011    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:33.054348    3272 out.go:270]   Sep 23 12:01:23 old-k8s-version-656000 kubelet[1893]: E0923 12:01:23.867467    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:33.054348    3272 out.go:270]   Sep 23 12:01:25 old-k8s-version-656000 kubelet[1893]: E0923 12:01:25.869111    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	I0923 12:01:33.054469    3272 out.go:358] Setting ErrFile to fd 1728...
	I0923 12:01:33.054469    3272 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:01:37.497959    5872 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-895600:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir: (19.2106635s)
	I0923 12:01:37.497959    5872 kic.go:203] duration metric: took 19.2266779s to extract preloaded images to volume ...
	I0923 12:01:37.508522    5872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 12:01:37.832814    5872 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:true NGoroutines:82 SystemTime:2024-09-23 12:01:37.800964335 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0923 12:01:37.845298    5872 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0923 12:01:38.177178    5872 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-895600 --name newest-cni-895600 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-895600 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-895600 --network newest-cni-895600 --ip 192.168.76.2 --volume newest-cni-895600:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed
	I0923 12:01:39.063361    5872 cli_runner.go:164] Run: docker container inspect newest-cni-895600 --format={{.State.Running}}
	I0923 12:01:39.152633    5872 cli_runner.go:164] Run: docker container inspect newest-cni-895600 --format={{.State.Status}}
	I0923 12:01:39.230842    5872 cli_runner.go:164] Run: docker exec newest-cni-895600 stat /var/lib/dpkg/alternatives/iptables
	I0923 12:01:39.380294    5872 oci.go:144] the created container "newest-cni-895600" has a running status.
	I0923 12:01:39.380294    5872 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-895600\id_rsa...
	I0923 12:01:39.622973    5872 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-895600\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0923 12:01:39.753492    5872 cli_runner.go:164] Run: docker container inspect newest-cni-895600 --format={{.State.Status}}
	I0923 12:01:39.854880    5872 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0923 12:01:39.854880    5872 kic_runner.go:114] Args: [docker exec --privileged newest-cni-895600 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0923 12:01:40.025707    5872 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-895600\id_rsa...
	I0923 12:01:42.586681    5872 cli_runner.go:164] Run: docker container inspect newest-cni-895600 --format={{.State.Status}}
	I0923 12:01:42.654787    5872 machine.go:93] provisionDockerMachine start ...
	I0923 12:01:42.662780    5872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895600
	I0923 12:01:42.740908    5872 main.go:141] libmachine: Using SSH client type: native
	I0923 12:01:42.752484    5872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x761bc0] 0x764700 <nil>  [] 0s} 127.0.0.1 63909 <nil> <nil>}
	I0923 12:01:42.752484    5872 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 12:01:42.938084    5872 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-895600
	
	I0923 12:01:42.938084    5872 ubuntu.go:169] provisioning hostname "newest-cni-895600"
	I0923 12:01:42.950852    5872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895600
	I0923 12:01:43.029177    5872 main.go:141] libmachine: Using SSH client type: native
	I0923 12:01:43.029781    5872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x761bc0] 0x764700 <nil>  [] 0s} 127.0.0.1 63909 <nil> <nil>}
	I0923 12:01:43.029781    5872 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-895600 && echo "newest-cni-895600" | sudo tee /etc/hostname
	I0923 12:01:43.255989    5872 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-895600
	
	I0923 12:01:43.267722    5872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895600
	I0923 12:01:43.073650    3272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 12:01:43.100453    3272 api_server.go:72] duration metric: took 5m53.5602541s to wait for apiserver process to appear ...
	I0923 12:01:43.101474    3272 api_server.go:88] waiting for apiserver healthz status ...
	I0923 12:01:43.113479    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0923 12:01:43.172147    3272 logs.go:276] 2 containers: [cf370ce76d59 710a0ba13429]
	I0923 12:01:43.181236    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0923 12:01:43.239975    3272 logs.go:276] 2 containers: [649f5bcaa4f1 5fde0ebfccf6]
	I0923 12:01:43.251563    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0923 12:01:43.305568    3272 logs.go:276] 2 containers: [6a0ae0205b95 cd2751a49ca4]
	I0923 12:01:43.316088    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0923 12:01:43.372822    3272 logs.go:276] 2 containers: [8dfd1e04a342 9dfaf3fd956f]
	I0923 12:01:43.385106    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0923 12:01:43.436989    3272 logs.go:276] 2 containers: [86d33ed9ed44 2e4fea7d2041]
	I0923 12:01:43.449318    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0923 12:01:43.502053    3272 logs.go:276] 2 containers: [8fb2248f24f8 f1a54f9ee3db]
	I0923 12:01:43.515033    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0923 12:01:43.564197    3272 logs.go:276] 0 containers: []
	W0923 12:01:43.564247    3272 logs.go:278] No container was found matching "kindnet"
	I0923 12:01:43.574435    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0923 12:01:43.624271    3272 logs.go:276] 1 containers: [2b7dc21ea030]
	I0923 12:01:43.636549    3272 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0923 12:01:43.678428    3272 logs.go:276] 2 containers: [f26f0d83255a 434662eca49c]
	I0923 12:01:43.678428    3272 logs.go:123] Gathering logs for kube-proxy [2e4fea7d2041] ...
	I0923 12:01:43.678428    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e4fea7d2041"
	I0923 12:01:43.731007    3272 logs.go:123] Gathering logs for kube-controller-manager [8fb2248f24f8] ...
	I0923 12:01:43.731060    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fb2248f24f8"
	I0923 12:01:43.802681    3272 logs.go:123] Gathering logs for kube-controller-manager [f1a54f9ee3db] ...
	I0923 12:01:43.802681    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f1a54f9ee3db"
	I0923 12:01:43.871273    3272 logs.go:123] Gathering logs for kube-apiserver [cf370ce76d59] ...
	I0923 12:01:43.871273    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf370ce76d59"
	I0923 12:01:43.952941    3272 logs.go:123] Gathering logs for etcd [649f5bcaa4f1] ...
	I0923 12:01:43.952941    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 649f5bcaa4f1"
	I0923 12:01:44.014140    3272 logs.go:123] Gathering logs for coredns [6a0ae0205b95] ...
	I0923 12:01:44.014140    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a0ae0205b95"
	I0923 12:01:44.065195    3272 logs.go:123] Gathering logs for kube-scheduler [9dfaf3fd956f] ...
	I0923 12:01:44.065261    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dfaf3fd956f"
	I0923 12:01:44.127720    3272 logs.go:123] Gathering logs for storage-provisioner [434662eca49c] ...
	I0923 12:01:44.127720    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 434662eca49c"
	I0923 12:01:44.175006    3272 logs.go:123] Gathering logs for kubelet ...
	I0923 12:01:44.175067    3272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0923 12:01:44.259142    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:09 old-k8s-version-656000 kubelet[1893]: E0923 11:56:09.975569    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0923 12:01:44.261214    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:12 old-k8s-version-656000 kubelet[1893]: E0923 11:56:12.641934    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.262652    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:13 old-k8s-version-656000 kubelet[1893]: E0923 11:56:13.742174    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.266764    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:29 old-k8s-version-656000 kubelet[1893]: E0923 11:56:29.020726    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0923 12:01:44.268445    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:33 old-k8s-version-656000 kubelet[1893]: E0923 11:56:33.177742    1893 pod_workers.go:191] Error syncing pod 5ea83d08-b331-4bfa-995f-9856437c78ec ("storage-provisioner_kube-system(5ea83d08-b331-4bfa-995f-9856437c78ec)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5ea83d08-b331-4bfa-995f-9856437c78ec)"
	W0923 12:01:44.269054    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:43 old-k8s-version-656000 kubelet[1893]: E0923 11:56:43.963428    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.270875    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:56 old-k8s-version-656000 kubelet[1893]: E0923 11:56:56.122782    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0923 12:01:44.271795    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:56 old-k8s-version-656000 kubelet[1893]: E0923 11:56:56.777992    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.273534    3272 logs.go:138] Found kubelet problem: Sep 23 11:56:58 old-k8s-version-656000 kubelet[1893]: E0923 11:56:58.985647    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0923 12:01:44.276272    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:10 old-k8s-version-656000 kubelet[1893]: E0923 11:57:10.376499    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0923 12:01:44.276307    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:12 old-k8s-version-656000 kubelet[1893]: E0923 11:57:12.902413    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.276307    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:20 old-k8s-version-656000 kubelet[1893]: E0923 11:57:20.903057    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.276843    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:27 old-k8s-version-656000 kubelet[1893]: E0923 11:57:27.903462    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.279476    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:32 old-k8s-version-656000 kubelet[1893]: E0923 11:57:32.470797    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0923 12:01:44.282496    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:40 old-k8s-version-656000 kubelet[1893]: E0923 11:57:40.005343    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0923 12:01:44.282593    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:46 old-k8s-version-656000 kubelet[1893]: E0923 11:57:46.898299    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.282593    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:50 old-k8s-version-656000 kubelet[1893]: E0923 11:57:50.897643    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.283350    3272 logs.go:138] Found kubelet problem: Sep 23 11:57:57 old-k8s-version-656000 kubelet[1893]: E0923 11:57:57.897696    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.283350    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:02 old-k8s-version-656000 kubelet[1893]: E0923 11:58:02.898254    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.283983    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:09 old-k8s-version-656000 kubelet[1893]: E0923 11:58:09.893616    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.283983    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:13 old-k8s-version-656000 kubelet[1893]: E0923 11:58:13.893937    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.286557    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:21 old-k8s-version-656000 kubelet[1893]: E0923 11:58:21.372831    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0923 12:01:44.286557    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:26 old-k8s-version-656000 kubelet[1893]: E0923 11:58:26.893634    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.286557    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:33 old-k8s-version-656000 kubelet[1893]: E0923 11:58:33.905339    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.286557    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:38 old-k8s-version-656000 kubelet[1893]: E0923 11:58:38.889353    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.287550    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:47 old-k8s-version-656000 kubelet[1893]: E0923 11:58:47.889599    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.287550    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:53 old-k8s-version-656000 kubelet[1893]: E0923 11:58:53.888891    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.287550    3272 logs.go:138] Found kubelet problem: Sep 23 11:58:59 old-k8s-version-656000 kubelet[1893]: E0923 11:58:59.889484    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.289546    3272 logs.go:138] Found kubelet problem: Sep 23 11:59:05 old-k8s-version-656000 kubelet[1893]: E0923 11:59:05.927866    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	W0923 12:01:44.289546    3272 logs.go:138] Found kubelet problem: Sep 23 11:59:13 old-k8s-version-656000 kubelet[1893]: E0923 11:59:13.886312    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.289546    3272 logs.go:138] Found kubelet problem: Sep 23 11:59:20 old-k8s-version-656000 kubelet[1893]: E0923 11:59:20.885891    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.290548    3272 logs.go:138] Found kubelet problem: Sep 23 11:59:28 old-k8s-version-656000 kubelet[1893]: E0923 11:59:28.884672    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.290548    3272 logs.go:138] Found kubelet problem: Sep 23 11:59:35 old-k8s-version-656000 kubelet[1893]: E0923 11:59:35.880315    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.293558    3272 logs.go:138] Found kubelet problem: Sep 23 11:59:42 old-k8s-version-656000 kubelet[1893]: E0923 11:59:42.382968    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0923 12:01:44.293558    3272 logs.go:138] Found kubelet problem: Sep 23 11:59:50 old-k8s-version-656000 kubelet[1893]: E0923 11:59:50.879975    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.293558    3272 logs.go:138] Found kubelet problem: Sep 23 11:59:54 old-k8s-version-656000 kubelet[1893]: E0923 11:59:54.879656    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.294549    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:01 old-k8s-version-656000 kubelet[1893]: E0923 12:00:01.880164    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.294549    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:07 old-k8s-version-656000 kubelet[1893]: E0923 12:00:07.877117    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.294549    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:13 old-k8s-version-656000 kubelet[1893]: E0923 12:00:13.875328    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.294549    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:20 old-k8s-version-656000 kubelet[1893]: E0923 12:00:20.875342    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.294549    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:27 old-k8s-version-656000 kubelet[1893]: E0923 12:00:27.875407    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.295549    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:31 old-k8s-version-656000 kubelet[1893]: E0923 12:00:31.875521    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.295549    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:39 old-k8s-version-656000 kubelet[1893]: E0923 12:00:39.871312    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.295549    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:44 old-k8s-version-656000 kubelet[1893]: E0923 12:00:44.893061    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.296557    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:50 old-k8s-version-656000 kubelet[1893]: E0923 12:00:50.872230    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.296557    3272 logs.go:138] Found kubelet problem: Sep 23 12:00:58 old-k8s-version-656000 kubelet[1893]: E0923 12:00:58.872565    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.296557    3272 logs.go:138] Found kubelet problem: Sep 23 12:01:01 old-k8s-version-656000 kubelet[1893]: E0923 12:01:01.876882    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.296557    3272 logs.go:138] Found kubelet problem: Sep 23 12:01:10 old-k8s-version-656000 kubelet[1893]: E0923 12:01:10.868254    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.296557    3272 logs.go:138] Found kubelet problem: Sep 23 12:01:12 old-k8s-version-656000 kubelet[1893]: E0923 12:01:12.868011    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.297549    3272 logs.go:138] Found kubelet problem: Sep 23 12:01:23 old-k8s-version-656000 kubelet[1893]: E0923 12:01:23.867467    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.297549    3272 logs.go:138] Found kubelet problem: Sep 23 12:01:25 old-k8s-version-656000 kubelet[1893]: E0923 12:01:25.869111    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.297549    3272 logs.go:138] Found kubelet problem: Sep 23 12:01:36 old-k8s-version-656000 kubelet[1893]: E0923 12:01:36.868788    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:44.297549    3272 logs.go:138] Found kubelet problem: Sep 23 12:01:37 old-k8s-version-656000 kubelet[1893]: E0923 12:01:37.867963    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0923 12:01:44.297549    3272 logs.go:123] Gathering logs for describe nodes ...
	I0923 12:01:44.297549    3272 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 12:01:44.506001    3272 logs.go:123] Gathering logs for etcd [5fde0ebfccf6] ...
	I0923 12:01:44.506111    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5fde0ebfccf6"
	I0923 12:01:44.575903    3272 logs.go:123] Gathering logs for coredns [cd2751a49ca4] ...
	I0923 12:01:44.575903    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd2751a49ca4"
	I0923 12:01:44.631527    3272 logs.go:123] Gathering logs for kube-scheduler [8dfd1e04a342] ...
	I0923 12:01:44.631575    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8dfd1e04a342"
	I0923 12:01:44.690320    3272 logs.go:123] Gathering logs for kube-proxy [86d33ed9ed44] ...
	I0923 12:01:44.690386    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86d33ed9ed44"
	I0923 12:01:44.750284    3272 logs.go:123] Gathering logs for kubernetes-dashboard [2b7dc21ea030] ...
	I0923 12:01:44.750357    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b7dc21ea030"
	I0923 12:01:44.801681    3272 logs.go:123] Gathering logs for dmesg ...
	I0923 12:01:44.801681    3272 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 12:01:44.829673    3272 logs.go:123] Gathering logs for kube-apiserver [710a0ba13429] ...
	I0923 12:01:44.829673    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 710a0ba13429"
	I0923 12:01:44.938481    3272 logs.go:123] Gathering logs for storage-provisioner [f26f0d83255a] ...
	I0923 12:01:44.938481    3272 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26f0d83255a"
	I0923 12:01:44.988050    3272 logs.go:123] Gathering logs for Docker ...
	I0923 12:01:44.988100    3272 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0923 12:01:45.046323    3272 logs.go:123] Gathering logs for container status ...
	I0923 12:01:45.046323    3272 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 12:01:45.151567    3272 out.go:358] Setting ErrFile to fd 1728...
	I0923 12:01:45.152098    3272 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 12:01:45.152220    3272 out.go:270] X Problems detected in kubelet:
	W0923 12:01:45.152247    3272 out.go:270]   Sep 23 12:01:12 old-k8s-version-656000 kubelet[1893]: E0923 12:01:12.868011    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:45.152247    3272 out.go:270]   Sep 23 12:01:23 old-k8s-version-656000 kubelet[1893]: E0923 12:01:23.867467    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0923 12:01:45.152247    3272 out.go:270]   Sep 23 12:01:25 old-k8s-version-656000 kubelet[1893]: E0923 12:01:25.869111    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:45.152247    3272 out.go:270]   Sep 23 12:01:36 old-k8s-version-656000 kubelet[1893]: E0923 12:01:36.868788    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0923 12:01:45.152247    3272 out.go:270]   Sep 23 12:01:37 old-k8s-version-656000 kubelet[1893]: E0923 12:01:37.867963    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0923 12:01:45.152247    3272 out.go:358] Setting ErrFile to fd 1728...
	I0923 12:01:45.152247    3272 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:01:43.349774    5872 main.go:141] libmachine: Using SSH client type: native
	I0923 12:01:43.349774    5872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x761bc0] 0x764700 <nil>  [] 0s} 127.0.0.1 63909 <nil> <nil>}
	I0923 12:01:43.349774    5872 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-895600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-895600/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-895600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 12:01:43.559284    5872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
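The SSH command above is an idempotent hostname fixup: do nothing if the hostname is already in `/etc/hosts`, rewrite an existing `127.0.1.1` entry if one exists, and append one otherwise. A sketch of the same logic applied to a scratch copy rather than the real `/etc/hosts` (file contents and hostname here are illustrative):

```shell
# Scratch hosts file standing in for /etc/hosts.
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"
NAME=newest-cni-895600

if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    # A 127.0.1.1 entry exists: rewrite it in place, as the log's sed does.
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    # No 127.0.1.1 entry yet: append one, as the log's tee -a does.
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
cat "$HOSTS"
```

Re-running the block is a no-op once the entry exists, which is why minikube can safely repeat it on every provision.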
	I0923 12:01:43.559284    5872 ubuntu.go:175] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I0923 12:01:43.559284    5872 ubuntu.go:177] setting up certificates
	I0923 12:01:43.559284    5872 provision.go:84] configureAuth start
	I0923 12:01:43.573663    5872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895600
	I0923 12:01:43.655416    5872 provision.go:143] copyHostCerts
	I0923 12:01:43.656415    5872 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I0923 12:01:43.656415    5872 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I0923 12:01:43.656415    5872 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0923 12:01:43.657425    5872 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I0923 12:01:43.658420    5872 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I0923 12:01:43.658420    5872 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0923 12:01:43.659427    5872 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I0923 12:01:43.659427    5872 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I0923 12:01:43.659427    5872 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1679 bytes)
	I0923 12:01:43.659427    5872 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-895600 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-895600]
	I0923 12:01:44.235355    5872 provision.go:177] copyRemoteCerts
	I0923 12:01:44.246939    5872 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 12:01:44.255424    5872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895600
	I0923 12:01:44.329561    5872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63909 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-895600\id_rsa Username:docker}
	I0923 12:01:44.467782    5872 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 12:01:44.533924    5872 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0923 12:01:44.595594    5872 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 12:01:44.648683    5872 provision.go:87] duration metric: took 1.0893469s to configureAuth
	I0923 12:01:44.648775    5872 ubuntu.go:193] setting minikube options for container-runtime
	I0923 12:01:44.649449    5872 config.go:182] Loaded profile config "newest-cni-895600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:01:44.659283    5872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895600
	I0923 12:01:44.749559    5872 main.go:141] libmachine: Using SSH client type: native
	I0923 12:01:44.749625    5872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x761bc0] 0x764700 <nil>  [] 0s} 127.0.0.1 63909 <nil> <nil>}
	I0923 12:01:44.749625    5872 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 12:01:44.940198    5872 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0923 12:01:44.940198    5872 ubuntu.go:71] root file system type: overlay
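The probe above detects the root filesystem type; a minimal standalone sketch, assuming GNU coreutils `df` (which supports `--output`):

```shell
# df --output=fstype prints a header row plus the type for "/";
# tail -n 1 keeps only the type (e.g. "overlay" inside a kic container).
fstype=$(df --output=fstype / | tail -n 1)
echo "root file system type: $fstype"
```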
	I0923 12:01:44.940198    5872 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 12:01:44.952806    5872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895600
	I0923 12:01:45.036320    5872 main.go:141] libmachine: Using SSH client type: native
	I0923 12:01:45.036320    5872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x761bc0] 0x764700 <nil>  [] 0s} 127.0.0.1 63909 <nil> <nil>}
	I0923 12:01:45.036320    5872 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 12:01:45.261888    5872 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 12:01:45.271360    5872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895600
	I0923 12:01:45.348419    5872 main.go:141] libmachine: Using SSH client type: native
	I0923 12:01:45.349389    5872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x761bc0] 0x764700 <nil>  [] 0s} 127.0.0.1 63909 <nil> <nil>}
	I0923 12:01:45.349389    5872 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 12:01:46.879938    5872 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-19 14:24:32.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-23 12:01:45.252691105 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0923 12:01:46.880025    5872 machine.go:96] duration metric: took 4.225038s to provisionDockerMachine
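The unit update above follows a "write `.new`, diff, swap only on change" pattern: the candidate file is compared against the installed one, and the move plus `daemon-reload`/`restart` only happen when they differ. A sketch with scratch files in place of `/lib/systemd/system` and an `echo` in place of the restart (no systemd needed here):

```shell
# Scratch stand-ins for the installed unit and the freshly rendered one.
OLD=$(mktemp)
NEW=$(mktemp)
echo 'ExecStart=/usr/bin/dockerd -H fd://' > "$OLD"
echo 'ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376' > "$NEW"

# diff -u exits 0 when the files match and non-zero when they differ,
# so the || branch only swaps (and would restart) on an actual change.
diff -u "$OLD" "$NEW" || {
  mv "$NEW" "$OLD"
  echo "would run: systemctl daemon-reload && systemctl restart docker"
}
grep ExecStart "$OLD"
```

This avoids needless daemon restarts when the rendered unit has not changed between runs.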
	I0923 12:01:46.880025    5872 client.go:171] duration metric: took 32.2510208s to LocalClient.Create
	I0923 12:01:46.880127    5872 start.go:167] duration metric: took 32.2511226s to libmachine.API.Create "newest-cni-895600"
	I0923 12:01:46.880127    5872 start.go:293] postStartSetup for "newest-cni-895600" (driver="docker")
	I0923 12:01:46.880127    5872 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 12:01:46.893120    5872 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 12:01:46.903288    5872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895600
	I0923 12:01:47.003965    5872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63909 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-895600\id_rsa Username:docker}
	I0923 12:01:47.161364    5872 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 12:01:47.177679    5872 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 12:01:47.177679    5872 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 12:01:47.177679    5872 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 12:01:47.177679    5872 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0923 12:01:47.177679    5872 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I0923 12:01:47.177679    5872 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I0923 12:01:47.179180    5872 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\43162.pem -> 43162.pem in /etc/ssl/certs
	I0923 12:01:47.196146    5872 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0923 12:01:47.217297    5872 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\43162.pem --> /etc/ssl/certs/43162.pem (1708 bytes)
	I0923 12:01:47.267022    5872 start.go:296] duration metric: took 386.877ms for postStartSetup
	I0923 12:01:47.278967    5872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895600
	I0923 12:01:47.356010    5872 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-895600\config.json ...
	I0923 12:01:47.369899    5872 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 12:01:47.377874    5872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895600
	I0923 12:01:47.460309    5872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63909 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-895600\id_rsa Username:docker}
	I0923 12:01:47.604292    5872 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
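The two disk probes above read one column each from `df`: row 2 of the output is the filesystem line, so `awk 'NR==2'` selects it, with `$5` being the use% column under `-h` and `$4` the available column under `-BG`. A sketch:

```shell
# Use% of /var (df -h column 5) and free space in GiB (df -BG column 4).
used=$(df -h /var | awk 'NR==2{print $5}')
avail=$(df -BG /var | awk 'NR==2{print $4}')
echo "used=$used avail=$avail"
```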
	I0923 12:01:47.617770    5872 start.go:128] duration metric: took 32.9967345s to createHost
	I0923 12:01:47.617770    5872 start.go:83] releasing machines lock for "newest-cni-895600", held for 32.9977334s
	I0923 12:01:47.628082    5872 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-895600
	I0923 12:01:47.698146    5872 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0923 12:01:47.709305    5872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895600
	I0923 12:01:47.711370    5872 ssh_runner.go:195] Run: cat /version.json
	I0923 12:01:47.720315    5872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-895600
	I0923 12:01:47.782096    5872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63909 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-895600\id_rsa Username:docker}
	I0923 12:01:47.790086    5872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63909 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-895600\id_rsa Username:docker}
	W0923 12:01:47.906365    5872 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0923 12:01:47.925273    5872 ssh_runner.go:195] Run: systemctl --version
	I0923 12:01:47.964775    5872 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 12:01:47.994036    5872 ssh_runner.go:195] Run: sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	W0923 12:01:48.016892    5872 start.go:439] unable to name loopback interface in configureRuntimes: unable to patch loopback cni config "/etc/cni/net.d/*loopback.conf*": sudo find \etc\cni\net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;: Process exited with status 1
	stdout:
	
	stderr:
	find: '\\etc\\cni\\net.d': No such file or directory
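A plausible reading of the failure above: the CNI config path was joined with Windows separators on the host, and on Linux a backslash is an ordinary filename character rather than a separator, so `\etc\cni\net.d` names a nonexistent file instead of the directory `/etc/cni/net.d`. Demonstrated on a scratch tree (paths here are illustrative):

```shell
# Scratch tree standing in for the guest's /etc/cni/net.d.
mkdir -p /tmp/cni-demo/etc/cni/net.d
touch /tmp/cni-demo/etc/cni/net.d/200-loopback.conf

# Forward slashes: find walks the directory as intended.
find /tmp/cni-demo/etc/cni/net.d -maxdepth 1 -type f
# Backslashes: the whole string is one (missing) filename, so find fails.
find '/tmp/cni-demo\etc\cni\net.d' 2>&1 | head -n 1
```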
	I0923 12:01:48.029536    5872 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W0923 12:01:48.033127    5872 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W0923 12:01:48.033127    5872 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0923 12:01:48.109623    5872 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0923 12:01:48.109623    5872 start.go:495] detecting cgroup driver to use...
	I0923 12:01:48.109781    5872 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 12:01:48.109862    5872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:01:48.159308    5872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 12:01:48.196206    5872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 12:01:48.218809    5872 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 12:01:48.229698    5872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 12:01:48.266834    5872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 12:01:48.302729    5872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 12:01:48.336718    5872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 12:01:48.375405    5872 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 12:01:48.407255    5872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 12:01:48.443105    5872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 12:01:48.482690    5872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 12:01:48.517667    5872 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 12:01:48.550541    5872 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 12:01:48.584196    5872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:01:48.733196    5872 ssh_runner.go:195] Run: sudo systemctl restart containerd
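The sequence above rewrites `/etc/containerd/config.toml` in place with `sed` before restarting containerd. A sketch of the central edit (forcing the runc shim to cgroupfs) applied to a scratch copy of a minimal config, not the real file:

```shell
# Scratch containerd config with the systemd cgroup driver enabled.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Same substitution the log runs: flip to cgroupfs while preserving the
# line's indentation via the captured leading spaces.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
cat "$CFG"
```

Capturing the indentation keeps the TOML table structure intact regardless of how deeply the key is nested.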
	I0923 12:01:48.968030    5872 start.go:495] detecting cgroup driver to use...
	I0923 12:01:48.968154    5872 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 12:01:48.984218    5872 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 12:01:49.012457    5872 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0923 12:01:49.025414    5872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 12:01:49.051597    5872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 12:01:49.112933    5872 ssh_runner.go:195] Run: which cri-dockerd
	I0923 12:01:49.146308    5872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 12:01:49.169430    5872 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 12:01:49.217320    5872 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 12:01:49.403522    5872 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 12:01:49.547225    5872 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 12:01:49.547342    5872 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 12:01:49.594718    5872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:01:49.747558    5872 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 12:01:50.753956    5872 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0063502s)
	I0923 12:01:50.766813    5872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 12:01:50.805664    5872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 12:01:50.844456    5872 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 12:01:51.003651    5872 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 12:01:51.174177    5872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:01:51.343713    5872 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 12:01:51.385257    5872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 12:01:51.423575    5872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:01:51.593594    5872 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 12:01:51.741631    5872 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 12:01:51.754836    5872 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 12:01:51.767670    5872 start.go:563] Will wait 60s for crictl version
	I0923 12:01:51.779520    5872 ssh_runner.go:195] Run: which crictl
	I0923 12:01:51.804591    5872 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 12:01:51.882752    5872 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0923 12:01:51.893728    5872 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 12:01:51.961910    5872 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 12:01:52.018399    5872 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0923 12:01:52.026074    5872 cli_runner.go:164] Run: docker exec -t newest-cni-895600 dig +short host.docker.internal
	I0923 12:01:52.223668    5872 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0923 12:01:52.235270    5872 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0923 12:01:52.251526    5872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
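The hosts update above refreshes one entry by filtering out any stale `host.minikube.internal` line, appending the current mapping, and copying the result back in a single step. A sketch on a scratch file rather than the real `/etc/hosts` (the stale 10.0.0.1 entry is illustrative):

```shell
# Scratch hosts file with an outdated host.minikube.internal mapping.
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n10.0.0.1\thost.minikube.internal\n' > "$HOSTS"
IP=192.168.65.254

# Drop the stale line, append the fresh one, then copy back whole.
{ grep -v $'\thost.minikube.internal$' "$HOSTS"
  printf '%s\thost.minikube.internal\n' "$IP"; } > /tmp/h.$$
cp /tmp/h.$$ "$HOSTS" && rm -f /tmp/h.$$
cat "$HOSTS"
```

Writing to a temp file and copying it back avoids truncating the hosts file while it is still being read by the `grep`.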
	I0923 12:01:52.285323    5872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-895600
	I0923 12:01:52.369969    5872 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0923 12:01:52.373619    5872 kubeadm.go:883] updating cluster {Name:newest-cni-895600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-895600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 12:01:52.373619    5872 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 12:01:52.383980    5872 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 12:01:52.434198    5872 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 12:01:52.434198    5872 docker.go:615] Images already preloaded, skipping extraction
	I0923 12:01:52.444755    5872 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 12:01:52.489485    5872 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 12:01:52.489485    5872 cache_images.go:84] Images are preloaded, skipping loading
	I0923 12:01:52.489485    5872 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.31.1 docker true true} ...
	I0923 12:01:52.490243    5872 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-895600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:newest-cni-895600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 12:01:52.499376    5872 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0923 12:01:52.585686    5872 cni.go:84] Creating CNI manager for ""
	I0923 12:01:52.585686    5872 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 12:01:52.585686    5872 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0923 12:01:52.585686    5872 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-895600 NodeName:newest-cni-895600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 12:01:52.585686    5872 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-895600"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 12:01:52.599310    5872 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 12:01:52.620226    5872 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 12:01:52.631524    5872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 12:01:52.652473    5872 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (353 bytes)
	I0923 12:01:52.691316    5872 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 12:01:52.726065    5872 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2283 bytes)
	I0923 12:01:52.773283    5872 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0923 12:01:52.786139    5872 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 12:01:52.829175    5872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 12:01:52.993898    5872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 12:01:53.024169    5872 certs.go:68] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-895600 for IP: 192.168.76.2
	I0923 12:01:53.024239    5872 certs.go:194] generating shared ca certs ...
	I0923 12:01:53.024278    5872 certs.go:226] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 12:01:53.024951    5872 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I0923 12:01:53.025335    5872 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I0923 12:01:53.025452    5872 certs.go:256] generating profile certs ...
	I0923 12:01:53.026118    5872 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-895600\client.key
	I0923 12:01:53.026376    5872 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-895600\client.crt with IP's: []
	I0923 12:01:55.154399    3272 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:63423/healthz ...
	I0923 12:01:55.172691    3272 api_server.go:279] https://127.0.0.1:63423/healthz returned 200:
	ok
	I0923 12:01:55.176144    3272 out.go:201] 
	W0923 12:01:55.178225    3272 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0923 12:01:55.178225    3272 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0923 12:01:55.178225    3272 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0923 12:01:55.178225    3272 out.go:270] * 
	W0923 12:01:55.179555    3272 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0923 12:01:55.182609    3272 out.go:201] 
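The failure output above ends with a structured exit reason (`X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: ...`) followed by a `* Suggestion:` line. When triaging many reports like this one, it can help to pull those two fields out of the raw text mechanically. A minimal sketch, assuming the line shapes shown above; the `triage` helper and its regexes are illustrative, not part of minikube:

```python
import re

def triage(log_text: str) -> dict:
    """Extract the minikube exit reason and suggestion from raw log output.

    Assumes lines shaped like the ones in this report:
      ... X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: <detail>
      ... * Suggestion: <text>
    """
    reason = re.search(r"X Exiting due to (\w+): (.+)", log_text)
    suggestion = re.search(r"\* Suggestion: (.+)", log_text)
    return {
        "code": reason.group(1) if reason else None,
        "detail": reason.group(2).strip() if reason else None,
        "suggestion": suggestion.group(1).strip() if suggestion else None,
    }

# Sample lines copied from the output above.
sample = (
    "W0923 12:01:55.178225    3272 out.go:270] "
    "X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: "
    "wait for healthy API server: controlPlane never updated to v1.20.0\n"
    "W0923 12:01:55.178225    3272 out.go:270] "
    "* Suggestion: Control Plane could not update, try minikube delete --all --purge\n"
)
print(triage(sample)["code"])  # K8S_UNHEALTHY_CONTROL_PLANE
```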
	
	
	==> Docker <==
	Sep 23 11:57:10 old-k8s-version-656000 dockerd[1454]: time="2024-09-23T11:57:10.167237041Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4" spanID=8debfa41eff412ac traceID=8da05944c8f7ebd927798d8f86451ae4
	Sep 23 11:57:10 old-k8s-version-656000 dockerd[1454]: time="2024-09-23T11:57:10.359436485Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4" spanID=8debfa41eff412ac traceID=8da05944c8f7ebd927798d8f86451ae4
	Sep 23 11:57:10 old-k8s-version-656000 dockerd[1454]: time="2024-09-23T11:57:10.359626323Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4" spanID=8debfa41eff412ac traceID=8da05944c8f7ebd927798d8f86451ae4
	Sep 23 11:57:10 old-k8s-version-656000 dockerd[1454]: time="2024-09-23T11:57:10.359692836Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" spanID=8debfa41eff412ac traceID=8da05944c8f7ebd927798d8f86451ae4
	Sep 23 11:57:32 old-k8s-version-656000 dockerd[1454]: time="2024-09-23T11:57:32.215588334Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4" spanID=a0cf535ae64b2942 traceID=f325acbabec7651cc6ad3267424ced77
	Sep 23 11:57:32 old-k8s-version-656000 dockerd[1454]: time="2024-09-23T11:57:32.462767116Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4" spanID=a0cf535ae64b2942 traceID=f325acbabec7651cc6ad3267424ced77
	Sep 23 11:57:32 old-k8s-version-656000 dockerd[1454]: time="2024-09-23T11:57:32.462941750Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4" spanID=a0cf535ae64b2942 traceID=f325acbabec7651cc6ad3267424ced77
	Sep 23 11:57:32 old-k8s-version-656000 dockerd[1454]: time="2024-09-23T11:57:32.462985559Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" spanID=a0cf535ae64b2942 traceID=f325acbabec7651cc6ad3267424ced77
	Sep 23 11:57:39 old-k8s-version-656000 dockerd[1454]: time="2024-09-23T11:57:39.996073493Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" spanID=fee4f90a82edcc9a traceID=b6d59514f2801f48c6c601f4d53f0582
	Sep 23 11:57:39 old-k8s-version-656000 dockerd[1454]: time="2024-09-23T11:57:39.996254429Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" spanID=fee4f90a82edcc9a traceID=b6d59514f2801f48c6c601f4d53f0582
	Sep 23 11:57:40 old-k8s-version-656000 dockerd[1454]: time="2024-09-23T11:57:40.003733220Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" spanID=fee4f90a82edcc9a traceID=b6d59514f2801f48c6c601f4d53f0582
	Sep 23 11:58:21 old-k8s-version-656000 dockerd[1454]: time="2024-09-23T11:58:21.134709804Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4" spanID=6232e5bdfc9cc2e4 traceID=6d92deafff5bdcd43682436c79832d89
	Sep 23 11:58:21 old-k8s-version-656000 dockerd[1454]: time="2024-09-23T11:58:21.348747558Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4" spanID=6232e5bdfc9cc2e4 traceID=6d92deafff5bdcd43682436c79832d89
	Sep 23 11:58:21 old-k8s-version-656000 dockerd[1454]: time="2024-09-23T11:58:21.349014811Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4" spanID=6232e5bdfc9cc2e4 traceID=6d92deafff5bdcd43682436c79832d89
	Sep 23 11:58:21 old-k8s-version-656000 dockerd[1454]: time="2024-09-23T11:58:21.349059020Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" spanID=6232e5bdfc9cc2e4 traceID=6d92deafff5bdcd43682436c79832d89
	Sep 23 11:59:05 old-k8s-version-656000 dockerd[1454]: time="2024-09-23T11:59:05.918057211Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" spanID=3c510689f4df8ee2 traceID=d41caa55212d4596c5de18bd83b7688c
	Sep 23 11:59:05 old-k8s-version-656000 dockerd[1454]: time="2024-09-23T11:59:05.918191937Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" spanID=3c510689f4df8ee2 traceID=d41caa55212d4596c5de18bd83b7688c
	Sep 23 11:59:05 old-k8s-version-656000 dockerd[1454]: time="2024-09-23T11:59:05.926280018Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" spanID=3c510689f4df8ee2 traceID=d41caa55212d4596c5de18bd83b7688c
	Sep 23 11:59:42 old-k8s-version-656000 dockerd[1454]: time="2024-09-23T11:59:42.176948890Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4" spanID=f319de6b51c8491d traceID=1cd3a867258d99a792d7184d38a7d473
	Sep 23 11:59:42 old-k8s-version-656000 dockerd[1454]: time="2024-09-23T11:59:42.373863885Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4" spanID=f319de6b51c8491d traceID=1cd3a867258d99a792d7184d38a7d473
	Sep 23 11:59:42 old-k8s-version-656000 dockerd[1454]: time="2024-09-23T11:59:42.374125936Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4" spanID=f319de6b51c8491d traceID=1cd3a867258d99a792d7184d38a7d473
	Sep 23 11:59:42 old-k8s-version-656000 dockerd[1454]: time="2024-09-23T11:59:42.374172545Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" spanID=f319de6b51c8491d traceID=1cd3a867258d99a792d7184d38a7d473
	Sep 23 12:01:48 old-k8s-version-656000 dockerd[1454]: time="2024-09-23T12:01:48.921493254Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" spanID=523603c6353e2fcf traceID=fad00874e1a2eceed446d50b1f089110
	Sep 23 12:01:48 old-k8s-version-656000 dockerd[1454]: time="2024-09-23T12:01:48.921737901Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" spanID=523603c6353e2fcf traceID=fad00874e1a2eceed446d50b1f089110
	Sep 23 12:01:48 old-k8s-version-656000 dockerd[1454]: time="2024-09-23T12:01:48.932267638Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host" spanID=523603c6353e2fcf traceID=fad00874e1a2eceed446d50b1f089110
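Each pull attempt in the dockerd journal above spans several lines (a warning, an "Attempting next endpoint" info line, and often a handler error) tied together by a shared `traceID=` field. Grouping by that field makes it easier to see how many distinct pull attempts failed. A small hypothetical helper, assuming only the `traceID=<hex>` token format visible in these entries:

```python
import re
from collections import defaultdict

def group_by_trace(lines):
    """Group dockerd journal lines by their traceID field.

    Lines without a traceID token are ignored.
    """
    groups = defaultdict(list)
    for line in lines:
        m = re.search(r"traceID=(\w+)", line)
        if m:
            groups[m.group(1)].append(line)
    return dict(groups)

# Abbreviated lines modeled on the journal entries above.
sample = [
    'level=warning msg="Error getting v2 registry: ..." spanID=fee4f90a82edcc9a traceID=b6d59514f2801f48c6c601f4d53f0582',
    'level=info msg="Attempting next endpoint for pull after error: ..." spanID=fee4f90a82edcc9a traceID=b6d59514f2801f48c6c601f4d53f0582',
    'level=error msg="Handler for POST /v1.40/images/create returned error: ..." spanID=3c510689f4df8ee2 traceID=d41caa55212d4596c5de18bd83b7688c',
]
groups = group_by_trace(sample)
print(len(groups))  # 2
```

Two distinct traceIDs here mean two separate pull attempts, even though the lines interleave in the journal.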
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2b7dc21ea0309       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        5 minutes ago       Running             kubernetes-dashboard      0                   e69757ada81eb       kubernetes-dashboard-cd95d586-j8m2d
	f26f0d83255af       6e38f40d628db                                                                                         5 minutes ago       Running             storage-provisioner       2                   f07e893f77bfb       storage-provisioner
	6a0ae0205b95b       bfe3a36ebd252                                                                                         5 minutes ago       Running             coredns                   1                   d9f34d7612f67       coredns-74ff55c5b-fvz5d
	434662eca49c2       6e38f40d628db                                                                                         5 minutes ago       Exited              storage-provisioner       1                   f07e893f77bfb       storage-provisioner
	a790d448812d4       56cc512116c8f                                                                                         5 minutes ago       Running             busybox                   1                   cc893107ca189       busybox
	86d33ed9ed44a       10cc881966cfd                                                                                         5 minutes ago       Running             kube-proxy                1                   5d83ddd559546       kube-proxy-mk6ch
	8fb2248f24f87       b9fa1895dcaa6                                                                                         6 minutes ago       Running             kube-controller-manager   1                   923254980ffff       kube-controller-manager-old-k8s-version-656000
	cf370ce76d594       ca9843d3b5454                                                                                         6 minutes ago       Running             kube-apiserver            1                   44bc6aa5b6a77       kube-apiserver-old-k8s-version-656000
	649f5bcaa4f19       0369cf4303ffd                                                                                         6 minutes ago       Running             etcd                      1                   e5e0dd18d4471       etcd-old-k8s-version-656000
	8dfd1e04a3429       3138b6e3d4712                                                                                         6 minutes ago       Running             kube-scheduler            1                   83522dcfc7623       kube-scheduler-old-k8s-version-656000
	9bd47b869baf5       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   7 minutes ago       Exited              busybox                   0                   f7443debd65e9       busybox
	cd2751a49ca4f       bfe3a36ebd252                                                                                         8 minutes ago       Exited              coredns                   0                   785ba93d84abd       coredns-74ff55c5b-fvz5d
	2e4fea7d2041e       10cc881966cfd                                                                                         9 minutes ago       Exited              kube-proxy                0                   56876cb1d05ae       kube-proxy-mk6ch
	710a0ba134295       ca9843d3b5454                                                                                         9 minutes ago       Exited              kube-apiserver            0                   09ec3c01c42e7       kube-apiserver-old-k8s-version-656000
	f1a54f9ee3db1       b9fa1895dcaa6                                                                                         9 minutes ago       Exited              kube-controller-manager   0                   65333c3e3f3ff       kube-controller-manager-old-k8s-version-656000
	9dfaf3fd956f0       3138b6e3d4712                                                                                         9 minutes ago       Exited              kube-scheduler            0                   e5491fab6166a       kube-scheduler-old-k8s-version-656000
	5fde0ebfccf6e       0369cf4303ffd                                                                                         9 minutes ago       Exited              etcd                      0                   3115043b72fd0       etcd-old-k8s-version-656000
	
	
	==> coredns [6a0ae0205b95] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 512bc0e06a520fa44f35dc15de10fdd6
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:33030 - 41313 "HINFO IN 8846778941774469272.8808957937993046354. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.072737355s
	
	
	==> coredns [cd2751a49ca4] <==
	I0923 11:53:21.022456       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-23 11:52:59.955572003 +0000 UTC m=+0.054661407) (total time: 21.070513646s):
	Trace[2019727887]: [21.070513646s] [21.070513646s] END
	E0923 11:53:21.022563       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	I0923 11:53:21.068790       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-23 11:52:59.998794232 +0000 UTC m=+0.097883736) (total time: 21.073741278s):
	Trace[939984059]: [21.073741278s] [21.073741278s] END
	E0923 11:53:21.068828       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	I0923 11:53:21.069027       1 trace.go:116] Trace[1474941318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-23 11:52:59.998824138 +0000 UTC m=+0.097913642) (total time: 21.073885605s):
	Trace[1474941318]: [21.073885605s] [21.073885605s] END
	E0923 11:53:21.069042       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 512bc0e06a520fa44f35dc15de10fdd6
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-656000
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-656000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=old-k8s-version-656000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T11_52_37_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 11:52:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-656000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 12:01:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 11:57:26 +0000   Mon, 23 Sep 2024 11:52:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 11:57:26 +0000   Mon, 23 Sep 2024 11:52:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 11:57:26 +0000   Mon, 23 Sep 2024 11:52:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 11:57:26 +0000   Mon, 23 Sep 2024 11:52:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-656000
	Capacity:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868688Ki
	  pods:               110
	Allocatable:
	  cpu:                16
	  ephemeral-storage:  1055762868Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32868688Ki
	  pods:               110
	System Info:
	  Machine ID:                 c46e3db0e5794553aaf921a272f5d7e0
	  System UUID:                c46e3db0e5794553aaf921a272f5d7e0
	  Boot ID:                    d450b61c-b7f5-4a84-8b7a-3c24688adc16
	  Kernel Version:             5.15.153.1-microsoft-standard-WSL2
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m14s
	  kube-system                 coredns-74ff55c5b-fvz5d                           100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     9m5s
	  kube-system                 etcd-old-k8s-version-656000                       100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         9m18s
	  kube-system                 kube-apiserver-old-k8s-version-656000             250m (1%)     0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-controller-manager-old-k8s-version-656000    200m (1%)     0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-proxy-mk6ch                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	  kube-system                 kube-scheduler-old-k8s-version-656000             100m (0%)     0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 metrics-server-9975d5f86-5pvv2                    100m (0%)     0 (0%)      200Mi (0%)       0 (0%)         7m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m59s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-hfqtz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-j8m2d               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (5%)   0 (0%)
	  memory             370Mi (1%)  170Mi (0%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  9m39s (x6 over 9m39s)  kubelet     Node old-k8s-version-656000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m38s (x7 over 9m39s)  kubelet     Node old-k8s-version-656000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m38s (x7 over 9m39s)  kubelet     Node old-k8s-version-656000 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m20s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m19s                  kubelet     Node old-k8s-version-656000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s                  kubelet     Node old-k8s-version-656000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s                  kubelet     Node old-k8s-version-656000 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m19s                  kubelet     Node old-k8s-version-656000 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m19s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m9s                   kubelet     Node old-k8s-version-656000 status is now: NodeReady
	  Normal  Starting                 8m59s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m11s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m10s (x8 over 6m10s)  kubelet     Node old-k8s-version-656000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m10s (x8 over 6m10s)  kubelet     Node old-k8s-version-656000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m10s (x7 over 6m10s)  kubelet     Node old-k8s-version-656000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m10s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m47s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Sep23 11:47] tmpfs: Unknown parameter 'noswap'
	[Sep23 11:48] tmpfs: Unknown parameter 'noswap'
	[  +6.797756] tmpfs: Unknown parameter 'noswap'
	[ +10.694800] tmpfs: Unknown parameter 'noswap'
	[ +27.822744] tmpfs: Unknown parameter 'noswap'
	[Sep23 11:49] tmpfs: Unknown parameter 'noswap'
	[Sep23 11:50] tmpfs: Unknown parameter 'noswap'
	[  +6.216301] tmpfs: Unknown parameter 'noswap'
	[Sep23 11:51] tmpfs: Unknown parameter 'noswap'
	[  +9.555719] tmpfs: Unknown parameter 'noswap'
	[  +0.015008] tmpfs: Unknown parameter 'noswap'
	[ +14.734015] tmpfs: Unknown parameter 'noswap'
	[Sep23 11:54] tmpfs: Unknown parameter 'noswap'
	[  +2.172321] tmpfs: Unknown parameter 'noswap'
	[  +1.550163] tmpfs: Unknown parameter 'noswap'
	[  +9.810946] tmpfs: Unknown parameter 'noswap'
	[  +0.159418] tmpfs: Unknown parameter 'noswap'
	[  +2.739937] tmpfs: Unknown parameter 'noswap'
	[  +8.821368] hrtimer: interrupt took 5952089 ns
	[Sep23 11:56] tmpfs: Unknown parameter 'noswap'
	[  +1.124231] tmpfs: Unknown parameter 'noswap'
	[  +7.163317] tmpfs: Unknown parameter 'noswap'
	[Sep23 12:00] tmpfs: Unknown parameter 'noswap'
	[  +3.989108] tmpfs: Unknown parameter 'noswap'
	[Sep23 12:01] tmpfs: Unknown parameter 'noswap'
	
	
	==> etcd [5fde0ebfccf6] <==
	2024-09-23 11:53:59.210127 W | etcdserver: read-only range request "key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true " with result "range_response_count:0 size:5" took too long (158.135517ms) to execute
	2024-09-23 11:53:59.305324 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:644" took too long (199.451007ms) to execute
	2024-09-23 11:53:59.510488 W | etcdserver: read-only range request "key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (294.373405ms) to execute
	2024-09-23 11:53:59.510894 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/kube-apiserver-old-k8s-version-656000.17f7dd6aba41936a\" " with result "range_response_count:1 size:851" took too long (291.53307ms) to execute
	2024-09-23 11:53:59.511041 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-apiserver-old-k8s-version-656000\" " with result "range_response_count:1 size:7425" took too long (288.879369ms) to execute
	2024-09-23 11:53:59.511255 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-656000\" " with result "range_response_count:1 size:5242" took too long (287.638635ms) to execute
	2024-09-23 11:53:59.511325 W | etcdserver: read-only range request "key:\"/registry/masterleases/192.168.103.2\" " with result "range_response_count:1 size:135" took too long (201.670826ms) to execute
	2024-09-23 11:53:59.794573 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (215.712774ms) to execute
	2024-09-23 11:53:59.811065 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-656000\" " with result "range_response_count:1 size:5242" took too long (221.433452ms) to execute
	2024-09-23 11:53:59.811266 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-old-k8s-version-656000\" " with result "range_response_count:1 size:5265" took too long (228.221532ms) to execute
	2024-09-23 11:53:59.811369 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/default/kubernetes\" " with result "range_response_count:1 size:421" took too long (229.716114ms) to execute
	2024-09-23 11:53:59.964665 W | etcdserver: read-only range request "key:\"/registry/endpointslices/default/kubernetes\" " with result "range_response_count:1 size:485" took too long (144.644873ms) to execute
	2024-09-23 11:54:00.010651 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-proxy-mk6ch\" " with result "range_response_count:1 size:4590" took too long (189.116159ms) to execute
	2024-09-23 11:54:02.105745 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-scheduler-old-k8s-version-656000\" " with result "range_response_count:1 size:4267" took too long (537.598166ms) to execute
	2024-09-23 11:54:02.106119 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (526.48177ms) to execute
	2024-09-23 11:54:05.092920 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 11:54:15.091304 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 11:54:25.092156 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 11:54:35.090794 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 11:54:45.088134 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 11:54:55.089052 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 11:54:58.979911 N | pkg/osutil: received terminated signal, shutting down...
	WARNING: 2024/09/23 11:54:58 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2024/09/23 11:54:58 grpc: addrConn.createTransport failed to connect to {192.168.103.2:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.103.2:2379: operation was canceled". Reconnecting...
	2024-09-23 11:54:59.079601 I | etcdserver: skipped leadership transfer for single voting member cluster
	
	
	==> etcd [649f5bcaa4f1] <==
	2024-09-23 12:00:10.331620 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:00:20.332965 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:00:30.331965 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:00:40.329165 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:00:50.328726 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:01:00.328331 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:01:10.327123 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:01:16.685817 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/metrics-server-9975d5f86-5pvv2\" " with result "range_response_count:1 size:4053" took too long (501.432082ms) to execute
	2024-09-23 12:01:16.690904 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (301.162062ms) to execute
	2024-09-23 12:01:16.691625 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1121" took too long (293.318148ms) to execute
	2024-09-23 12:01:20.323996 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:01:24.124689 W | wal: sync duration of 1.161539412s, expected less than 1s
	2024-09-23 12:01:24.125445 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/metrics-server-9975d5f86-5pvv2\" " with result "range_response_count:1 size:4053" took too long (940.192161ms) to execute
	2024-09-23 12:01:24.125891 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (738.870305ms) to execute
	2024-09-23 12:01:24.126284 W | etcdserver: read-only range request "key:\"/registry/mutatingwebhookconfigurations/\" range_end:\"/registry/mutatingwebhookconfigurations0\" count_only:true " with result "range_response_count:0 size:5" took too long (1.340622443s) to execute
	2024-09-23 12:01:26.961269 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1121" took too long (146.11761ms) to execute
	2024-09-23 12:01:30.323711 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:01:35.578689 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (191.382642ms) to execute
	2024-09-23 12:01:36.772863 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (389.402258ms) to execute
	2024-09-23 12:01:36.773127 W | etcdserver: read-only range request "key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" count_only:true " with result "range_response_count:0 size:5" took too long (515.763617ms) to execute
	2024-09-23 12:01:37.213321 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:1121" took too long (146.581742ms) to execute
	2024-09-23 12:01:37.213478 W | etcdserver: read-only range request "key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true " with result "range_response_count:0 size:5" took too long (268.316212ms) to execute
	2024-09-23 12:01:37.213608 W | etcdserver: read-only range request "key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-8d5bb5db8-hfqtz.17f7dd959d3f994e\" " with result "range_response_count:1 size:921" took too long (340.586887ms) to execute
	2024-09-23 12:01:40.320835 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-23 12:01:50.320171 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 12:01:58 up 14:50,  0 users,  load average: 6.86, 6.33, 6.31
	Linux old-k8s-version-656000 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [710a0ba13429] <==
	W0923 11:55:08.437825       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 11:55:08.537246       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 11:55:08.545977       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 11:55:08.573247       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 11:55:08.575081       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 11:55:08.605929       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 11:55:08.617564       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 11:55:08.635258       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 11:55:08.699932       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 11:55:08.700605       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 11:55:08.707942       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 11:55:08.800073       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 11:55:08.807358       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 11:55:08.810296       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 11:55:08.818355       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 11:55:08.831790       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 11:55:08.858241       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 11:55:08.889897       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 11:55:08.909702       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 11:55:08.932170       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 11:55:08.946331       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 11:55:09.010255       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 11:55:09.013661       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 11:55:09.016279       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0923 11:55:09.073168       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	
	==> kube-apiserver [cf370ce76d59] <==
	E0923 11:59:11.674967       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0923 11:59:11.674980       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0923 11:59:39.706927       1 client.go:360] parsed scheme: "passthrough"
	I0923 11:59:39.707047       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0923 11:59:39.707059       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0923 12:00:15.304299       1 client.go:360] parsed scheme: "passthrough"
	I0923 12:00:15.304411       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0923 12:00:15.304423       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0923 12:00:56.299329       1 client.go:360] parsed scheme: "passthrough"
	I0923 12:00:56.299463       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0923 12:00:56.299475       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0923 12:01:05.237545       1 handler_proxy.go:102] no RequestInfo found in the context
	E0923 12:01:05.237832       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0923 12:01:05.237878       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0923 12:01:16.688230       1 trace.go:205] Trace[619912272]: "Get" url:/api/v1/namespaces/kube-system/pods/metrics-server-9975d5f86-5pvv2,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,client:192.168.103.1 (23-Sep-2024 12:01:16.182) (total time: 505ms):
	Trace[619912272]: ---"About to write a response" 504ms (12:01:00.687)
	Trace[619912272]: [505.193912ms] [505.193912ms] END
	I0923 12:01:24.128289       1 trace.go:205] Trace[1419115515]: "Get" url:/api/v1/namespaces/kube-system/pods/metrics-server-9975d5f86-5pvv2,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,client:192.168.103.1 (23-Sep-2024 12:01:23.184) (total time: 943ms):
	Trace[1419115515]: ---"About to write a response" 942ms (12:01:00.126)
	Trace[1419115515]: [943.971296ms] [943.971296ms] END
	I0923 12:01:30.448013       1 client.go:360] parsed scheme: "passthrough"
	I0923 12:01:30.448135       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0923 12:01:30.448147       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [8fb2248f24f8] <==
	W0923 11:57:32.615373       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0923 11:57:59.211799       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0923 11:58:04.261974       1 request.go:655] Throttling request took 1.048008245s, request: GET:https://192.168.103.2:8443/apis/certificates.k8s.io/v1beta1?timeout=32s
	W0923 11:58:05.114160       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0923 11:58:29.710812       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0923 11:58:36.756752       1 request.go:655] Throttling request took 1.048205624s, request: GET:https://192.168.103.2:8443/apis/authorization.k8s.io/v1?timeout=32s
	W0923 11:58:37.608786       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0923 11:59:00.208735       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0923 11:59:09.256173       1 request.go:655] Throttling request took 1.048243287s, request: GET:https://192.168.103.2:8443/apis/authentication.k8s.io/v1beta1?timeout=32s
	W0923 11:59:10.108022       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0923 11:59:30.708856       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0923 11:59:41.753959       1 request.go:655] Throttling request took 1.048253234s, request: GET:https://192.168.103.2:8443/apis/flowcontrol.apiserver.k8s.io/v1beta1?timeout=32s
	W0923 11:59:42.606140       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0923 12:00:01.207075       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0923 12:00:14.253050       1 request.go:655] Throttling request took 1.047791922s, request: GET:https://192.168.103.2:8443/apis/authorization.k8s.io/v1beta1?timeout=32s
	W0923 12:00:15.104736       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0923 12:00:31.705087       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0923 12:00:46.751914       1 request.go:655] Throttling request took 1.043374203s, request: GET:https://192.168.103.2:8443/apis/flowcontrol.apiserver.k8s.io/v1beta1?timeout=32s
	W0923 12:00:47.603798       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0923 12:01:02.207459       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0923 12:01:19.250210       1 request.go:655] Throttling request took 1.047510334s, request: GET:https://192.168.103.2:8443/apis/autoscaling/v2beta2?timeout=32s
	W0923 12:01:20.103195       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0923 12:01:32.707156       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0923 12:01:51.751248       1 request.go:655] Throttling request took 1.047821224s, request: GET:https://192.168.103.2:8443/apis/discovery.k8s.io/v1beta1?timeout=32s
	W0923 12:01:52.603569       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-controller-manager [f1a54f9ee3db] <==
	I0923 11:52:53.920335       1 shared_informer.go:247] Caches are synced for taint 
	I0923 11:52:53.920743       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	W0923 11:52:53.920810       1 node_lifecycle_controller.go:1044] Missing timestamp for Node old-k8s-version-656000. Assuming now as a timestamp.
	I0923 11:52:53.920855       1 node_lifecycle_controller.go:1245] Controller detected that zone  is now in state Normal.
	I0923 11:52:53.921535       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0923 11:52:53.921666       1 event.go:291] "Event occurred" object="old-k8s-version-656000" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-656000 event: Registered Node old-k8s-version-656000 in Controller"
	I0923 11:52:53.921688       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mk6ch"
	I0923 11:52:53.921808       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0923 11:52:53.920368       1 range_allocator.go:373] Set node old-k8s-version-656000 PodCIDR to [10.244.0.0/24]
	I0923 11:52:54.002471       1 shared_informer.go:247] Caches are synced for resource quota 
	I0923 11:52:54.097486       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0923 11:52:54.097996       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0923 11:52:54.098104       1 shared_informer.go:247] Caches are synced for resource quota 
	E0923 11:52:54.098280       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	I0923 11:52:54.103821       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-fvz5d"
	I0923 11:52:54.210645       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0923 11:52:54.319439       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-lb25b"
	I0923 11:52:54.511052       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0923 11:52:54.606723       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0923 11:52:54.606778       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0923 11:52:58.429271       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0923 11:52:58.600225       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-lb25b"
	I0923 11:54:57.008266       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E0923 11:54:57.204272       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I0923 11:54:58.093131       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-9975d5f86-5pvv2"
	
	
	==> kube-proxy [2e4fea7d2041] <==
	I0923 11:52:59.651416       1 node.go:172] Successfully retrieved node IP: 192.168.103.2
	I0923 11:52:59.651564       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.103.2), assume IPv4 operation
	W0923 11:52:59.898241       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0923 11:52:59.898723       1 server_others.go:185] Using iptables Proxier.
	I0923 11:52:59.899628       1 server.go:650] Version: v1.20.0
	I0923 11:52:59.901160       1 config.go:315] Starting service config controller
	I0923 11:52:59.901308       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0923 11:52:59.901362       1 config.go:224] Starting endpoint slice config controller
	I0923 11:52:59.901378       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0923 11:53:00.002583       1 shared_informer.go:247] Caches are synced for service config 
	I0923 11:53:00.002643       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [86d33ed9ed44] <==
	I0923 11:56:11.001463       1 node.go:172] Successfully retrieved node IP: 192.168.103.2
	I0923 11:56:11.001610       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.103.2), assume IPv4 operation
	W0923 11:56:11.168467       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0923 11:56:11.168717       1 server_others.go:185] Using iptables Proxier.
	I0923 11:56:11.169330       1 server.go:650] Version: v1.20.0
	I0923 11:56:11.170745       1 config.go:315] Starting service config controller
	I0923 11:56:11.170863       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0923 11:56:11.171723       1 config.go:224] Starting endpoint slice config controller
	I0923 11:56:11.171737       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0923 11:56:11.271458       1 shared_informer.go:247] Caches are synced for service config 
	I0923 11:56:11.272475       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [8dfd1e04a342] <==
	I0923 11:55:56.973886       1 serving.go:331] Generated self-signed cert in-memory
	W0923 11:56:04.473171       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0923 11:56:04.473215       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0923 11:56:04.473411       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0923 11:56:04.474436       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0923 11:56:04.981146       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 11:56:04.981354       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0923 11:56:04.981365       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0923 11:56:04.981438       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0923 11:56:05.366704       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [9dfaf3fd956f] <==
	I0923 11:52:32.029106       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0923 11:52:32.103334       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 11:52:32.103381       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 11:52:32.103758       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 11:52:32.104818       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 11:52:32.104997       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 11:52:32.105754       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 11:52:32.106014       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 11:52:32.108738       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 11:52:32.109925       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 11:52:32.110494       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 11:52:32.110734       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 11:52:32.921062       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 11:52:32.930229       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 11:52:32.982933       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 11:52:33.006210       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 11:52:33.099522       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 11:52:33.118272       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 11:52:33.119978       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0923 11:52:33.157970       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 11:52:33.381304       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 11:52:33.517968       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 11:52:33.621929       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 11:52:33.631761       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0923 11:52:35.809380       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Sep 23 11:59:54 old-k8s-version-656000 kubelet[1893]: E0923 11:59:54.879656    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Sep 23 12:00:01 old-k8s-version-656000 kubelet[1893]: E0923 12:00:01.880164    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 23 12:00:07 old-k8s-version-656000 kubelet[1893]: E0923 12:00:07.877117    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Sep 23 12:00:13 old-k8s-version-656000 kubelet[1893]: E0923 12:00:13.875328    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 23 12:00:20 old-k8s-version-656000 kubelet[1893]: E0923 12:00:20.875342    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Sep 23 12:00:27 old-k8s-version-656000 kubelet[1893]: E0923 12:00:27.875407    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 23 12:00:31 old-k8s-version-656000 kubelet[1893]: E0923 12:00:31.875521    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Sep 23 12:00:39 old-k8s-version-656000 kubelet[1893]: E0923 12:00:39.871312    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 23 12:00:44 old-k8s-version-656000 kubelet[1893]: E0923 12:00:44.893061    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Sep 23 12:00:48 old-k8s-version-656000 kubelet[1893]: W0923 12:00:48.076954    1893 sysinfo.go:203] Nodes topology is not available, providing CPU topology
	Sep 23 12:00:48 old-k8s-version-656000 kubelet[1893]: W0923 12:00:48.079423    1893 sysfs.go:348] unable to read /sys/devices/system/cpu/cpu0/online: open /sys/devices/system/cpu/cpu0/online: no such file or directory
	Sep 23 12:00:50 old-k8s-version-656000 kubelet[1893]: E0923 12:00:50.872230    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 23 12:00:58 old-k8s-version-656000 kubelet[1893]: E0923 12:00:58.872565    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Sep 23 12:01:01 old-k8s-version-656000 kubelet[1893]: E0923 12:01:01.876882    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 23 12:01:10 old-k8s-version-656000 kubelet[1893]: E0923 12:01:10.868254    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Sep 23 12:01:12 old-k8s-version-656000 kubelet[1893]: E0923 12:01:12.868011    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 23 12:01:23 old-k8s-version-656000 kubelet[1893]: E0923 12:01:23.867467    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 23 12:01:25 old-k8s-version-656000 kubelet[1893]: E0923 12:01:25.869111    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Sep 23 12:01:36 old-k8s-version-656000 kubelet[1893]: E0923 12:01:36.868788    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Sep 23 12:01:37 old-k8s-version-656000 kubelet[1893]: E0923 12:01:37.867963    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 23 12:01:48 old-k8s-version-656000 kubelet[1893]: E0923 12:01:48.933248    1893 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host
	Sep 23 12:01:48 old-k8s-version-656000 kubelet[1893]: E0923 12:01:48.933472    1893 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host
	Sep 23 12:01:48 old-k8s-version-656000 kubelet[1893]: E0923 12:01:48.933746    1893 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-8xlzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host
	Sep 23 12:01:48 old-k8s-version-656000 kubelet[1893]: E0923 12:01:48.933785    1893 pod_workers.go:191] Error syncing pod 4f374c79-db93-4ed0-9f66-77ca94f03dcf ("metrics-server-9975d5f86-5pvv2_kube-system(4f374c79-db93-4ed0-9f66-77ca94f03dcf)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.65.254:53: no such host"
	Sep 23 12:01:51 old-k8s-version-656000 kubelet[1893]: E0923 12:01:51.879237    1893 pod_workers.go:191] Error syncing pod d36f228f-48a3-4490-b597-31b418142e19 ("dashboard-metrics-scraper-8d5bb5db8-hfqtz_kubernetes-dashboard(d36f228f-48a3-4490-b597-31b418142e19)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	
	
	==> kubernetes-dashboard [2b7dc21ea030] <==
	2024/09/23 11:56:56 Starting overwatch
	2024/09/23 11:56:56 Using namespace: kubernetes-dashboard
	2024/09/23 11:56:56 Using in-cluster config to connect to apiserver
	2024/09/23 11:56:56 Using secret token for csrf signing
	2024/09/23 11:56:56 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/23 11:56:56 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/23 11:56:56 Successful initial request to the apiserver, version: v1.20.0
	2024/09/23 11:56:56 Generating JWE encryption key
	2024/09/23 11:56:56 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/23 11:56:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/23 11:56:57 Initializing JWE encryption key from synchronized object
	2024/09/23 11:56:57 Creating in-cluster Sidecar client
	2024/09/23 11:56:57 Serving insecurely on HTTP port: 9090
	2024/09/23 11:56:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 11:57:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 11:57:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 11:58:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 11:58:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 11:59:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 11:59:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 12:00:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 12:00:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 12:01:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/23 12:01:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [434662eca49c] <==
	I0923 11:56:11.120902       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0923 11:56:32.211011       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [f26f0d83255a] <==
	I0923 11:56:50.200608       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 11:56:50.297200       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 11:56:50.297320       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 11:57:07.849576       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5368ea31-e36d-4079-aa6a-b5b5c05259da", APIVersion:"v1", ResourceVersion:"799", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-656000_26e865b3-3b51-4e8d-b33c-0baaa581b77f became leader
	I0923 11:57:07.854647       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 11:57:07.855819       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-656000_26e865b3-3b51-4e8d-b33c-0baaa581b77f!
	I0923 11:57:07.956366       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-656000_26e865b3-3b51-4e8d-b33c-0baaa581b77f!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-656000 -n old-k8s-version-656000
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-656000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-5pvv2 dashboard-metrics-scraper-8d5bb5db8-hfqtz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-656000 describe pod metrics-server-9975d5f86-5pvv2 dashboard-metrics-scraper-8d5bb5db8-hfqtz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-656000 describe pod metrics-server-9975d5f86-5pvv2 dashboard-metrics-scraper-8d5bb5db8-hfqtz: exit status 1 (384.4352ms)

** stderr ** 
	E0923 12:02:01.348236    9284 memcache.go:287] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
	E0923 12:02:01.460916    9284 memcache.go:121] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
	E0923 12:02:01.477882    9284 memcache.go:121] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
	E0923 12:02:01.489894    9284 memcache.go:121] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
	Error from server (NotFound): pods "metrics-server-9975d5f86-5pvv2" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-8d5bb5db8-hfqtz" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-656000 describe pod metrics-server-9975d5f86-5pvv2 dashboard-metrics-scraper-8d5bb5db8-hfqtz: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (410.37s)


Test pass (311/339)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 9.56
4 TestDownloadOnly/v1.20.0/preload-exists 0.08
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.28
9 TestDownloadOnly/v1.20.0/DeleteAll 1.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.89
12 TestDownloadOnly/v1.31.1/json-events 6.88
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.24
18 TestDownloadOnly/v1.31.1/DeleteAll 1.25
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.92
20 TestDownloadOnlyKic 3.32
21 TestBinaryMirror 2.9
22 TestOffline 131.17
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.38
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.38
27 TestAddons/Setup 515.61
29 TestAddons/serial/Volcano 56.12
31 TestAddons/serial/GCPAuth/Namespaces 0.34
35 TestAddons/parallel/InspektorGadget 12.58
36 TestAddons/parallel/MetricsServer 7.32
38 TestAddons/parallel/CSI 65.18
39 TestAddons/parallel/Headlamp 32.4
40 TestAddons/parallel/CloudSpanner 7.13
41 TestAddons/parallel/LocalPath 69.17
42 TestAddons/parallel/NvidiaDevicePlugin 7.83
43 TestAddons/parallel/Yakd 13.52
44 TestAddons/StoppedEnableDisable 13.4
45 TestCertOptions 89.19
46 TestCertExpiration 309.7
47 TestDockerFlags 85.39
48 TestForceSystemdFlag 80.5
49 TestForceSystemdEnv 104.24
56 TestErrorSpam/start 3.74
57 TestErrorSpam/status 2.71
58 TestErrorSpam/pause 3.39
59 TestErrorSpam/unpause 3.26
60 TestErrorSpam/stop 13.9
63 TestFunctional/serial/CopySyncFile 0.04
64 TestFunctional/serial/StartWithProxy 94.25
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 43.64
67 TestFunctional/serial/KubeContext 0.13
68 TestFunctional/serial/KubectlGetPods 0.23
71 TestFunctional/serial/CacheCmd/cache/add_remote 6.2
72 TestFunctional/serial/CacheCmd/cache/add_local 3.43
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.25
74 TestFunctional/serial/CacheCmd/cache/list 0.26
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.79
76 TestFunctional/serial/CacheCmd/cache/cache_reload 3.89
77 TestFunctional/serial/CacheCmd/cache/delete 0.49
78 TestFunctional/serial/MinikubeKubectlCmd 0.52
80 TestFunctional/serial/ExtraConfig 48.43
81 TestFunctional/serial/ComponentHealth 0.18
82 TestFunctional/serial/LogsCmd 2.31
83 TestFunctional/serial/LogsFileCmd 2.4
84 TestFunctional/serial/InvalidService 5.59
86 TestFunctional/parallel/ConfigCmd 1.84
88 TestFunctional/parallel/DryRun 2.66
89 TestFunctional/parallel/InternationalLanguage 1.01
90 TestFunctional/parallel/StatusCmd 2.86
95 TestFunctional/parallel/AddonsCmd 0.71
96 TestFunctional/parallel/PersistentVolumeClaim 100.62
98 TestFunctional/parallel/SSHCmd 1.79
99 TestFunctional/parallel/CpCmd 5.01
100 TestFunctional/parallel/MySQL 72.99
101 TestFunctional/parallel/FileSync 0.77
102 TestFunctional/parallel/CertSync 4.62
106 TestFunctional/parallel/NodeLabels 0.23
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.92
110 TestFunctional/parallel/License 3.64
111 TestFunctional/parallel/ProfileCmd/profile_not_create 1.61
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 1.11
114 TestFunctional/parallel/ProfileCmd/profile_list 1.53
115 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 24.64
118 TestFunctional/parallel/ProfileCmd/profile_json_output 1.53
119 TestFunctional/parallel/ServiceCmd/DeployApp 23.46
120 TestFunctional/parallel/Version/short 0.24
121 TestFunctional/parallel/Version/components 1.51
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.65
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.6
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.68
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.62
126 TestFunctional/parallel/ImageCommands/ImageBuild 10.16
127 TestFunctional/parallel/ImageCommands/Setup 2.03
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.18
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.21
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.54
135 TestFunctional/parallel/ServiceCmd/List 1.39
136 TestFunctional/parallel/ServiceCmd/JSONOutput 1.29
137 TestFunctional/parallel/ServiceCmd/HTTPS 15.01
138 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.03
139 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.15
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.38
141 TestFunctional/parallel/DockerEnv/powershell 7.33
142 TestFunctional/parallel/ImageCommands/ImageRemove 1.36
143 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.9
144 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.2
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.44
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.46
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.4
148 TestFunctional/parallel/ServiceCmd/Format 15.01
149 TestFunctional/parallel/ServiceCmd/URL 15.01
150 TestFunctional/delete_echo-server_images 0.2
151 TestFunctional/delete_my-image_image 0.09
152 TestFunctional/delete_minikube_cached_images 0.08
156 TestMultiControlPlane/serial/StartCluster 205.07
157 TestMultiControlPlane/serial/DeployApp 26.43
158 TestMultiControlPlane/serial/PingHostFromPods 3.61
159 TestMultiControlPlane/serial/AddWorkerNode 53.17
160 TestMultiControlPlane/serial/NodeLabels 0.18
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 2.94
162 TestMultiControlPlane/serial/CopyFile 45.62
163 TestMultiControlPlane/serial/StopSecondaryNode 13.9
164 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 2.14
165 TestMultiControlPlane/serial/RestartSecondaryNode 151.22
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 2.87
167 TestMultiControlPlane/serial/RestartClusterKeepsNodes 210.61
168 TestMultiControlPlane/serial/DeleteSecondaryNode 16.72
169 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 2.18
170 TestMultiControlPlane/serial/StopCluster 36.8
171 TestMultiControlPlane/serial/RestartCluster 155.08
172 TestMultiControlPlane/serial/DegradedAfterClusterRestart 2.15
173 TestMultiControlPlane/serial/AddSecondaryNode 71.73
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 3.02
177 TestImageBuild/serial/Setup 61.13
178 TestImageBuild/serial/NormalBuild 5.34
179 TestImageBuild/serial/BuildWithBuildArg 2.28
180 TestImageBuild/serial/BuildWithDockerIgnore 1.51
181 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.69
185 TestJSONOutput/start/Command 95.93
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 1.37
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 1.19
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 12.36
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.88
210 TestKicCustomNetwork/create_custom_network 70.17
211 TestKicCustomNetwork/use_default_bridge_network 68.69
212 TestKicExistingNetwork 70.44
213 TestKicCustomSubnet 69.41
214 TestKicStaticIP 71.79
215 TestMainNoArgs 0.24
216 TestMinikubeProfile 135.75
219 TestMountStart/serial/StartWithMountFirst 17.96
220 TestMountStart/serial/VerifyMountFirst 0.73
221 TestMountStart/serial/StartWithMountSecond 16.79
222 TestMountStart/serial/VerifyMountSecond 0.72
223 TestMountStart/serial/DeleteFirst 2.8
224 TestMountStart/serial/VerifyMountPostDelete 0.74
225 TestMountStart/serial/Stop 1.97
226 TestMountStart/serial/RestartStopped 12.05
227 TestMountStart/serial/VerifyMountPostStop 0.71
230 TestMultiNode/serial/FreshStart2Nodes 144.51
231 TestMultiNode/serial/DeployApp2Nodes 42.84
232 TestMultiNode/serial/PingHostFrom2Pods 2.44
233 TestMultiNode/serial/AddNode 48.07
234 TestMultiNode/serial/MultiNodeLabels 0.18
235 TestMultiNode/serial/ProfileList 1.91
236 TestMultiNode/serial/CopyFile 25.87
237 TestMultiNode/serial/StopNode 4.75
238 TestMultiNode/serial/StartAfterStop 17.81
239 TestMultiNode/serial/RestartKeepsNodes 116.82
240 TestMultiNode/serial/DeleteNode 9.68
241 TestMultiNode/serial/StopMultiNode 24.26
242 TestMultiNode/serial/RestartMultiNode 60.02
243 TestMultiNode/serial/ValidateNameConflict 65.06
247 TestPreload 152.05
248 TestScheduledStopWindows 131.26
252 TestInsufficientStorage 41.45
253 TestRunningBinaryUpgrade 208.4
255 TestKubernetesUpgrade 297.02
256 TestMissingContainerUpgrade 242.65
258 TestStoppedBinaryUpgrade/Setup 1.05
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.29
271 TestNoKubernetes/serial/StartWithK8s 92.48
272 TestStoppedBinaryUpgrade/Upgrade 308.79
273 TestNoKubernetes/serial/StartWithStopK8s 25.82
274 TestNoKubernetes/serial/Start 29.75
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.84
276 TestNoKubernetes/serial/ProfileList 4.33
277 TestNoKubernetes/serial/Stop 5.79
278 TestNoKubernetes/serial/StartNoArgs 13.68
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.77
280 TestStoppedBinaryUpgrade/MinikubeLogs 4.33
289 TestPause/serial/Start 114.26
290 TestNetworkPlugins/group/auto/Start 103.08
291 TestNetworkPlugins/group/kindnet/Start 109.66
292 TestPause/serial/SecondStartNoReconfiguration 55.64
293 TestNetworkPlugins/group/calico/Start 169.64
294 TestNetworkPlugins/group/auto/KubeletFlags 1.14
295 TestNetworkPlugins/group/auto/NetCatPod 25.79
296 TestNetworkPlugins/group/auto/DNS 0.41
297 TestNetworkPlugins/group/auto/Localhost 0.32
298 TestNetworkPlugins/group/auto/HairPin 0.31
299 TestPause/serial/Pause 1.42
300 TestPause/serial/VerifyStatus 0.91
301 TestPause/serial/Unpause 1.3
302 TestPause/serial/PauseAgain 1.75
303 TestPause/serial/DeletePaused 4.99
304 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
305 TestPause/serial/VerifyDeletedResources 11.9
306 TestNetworkPlugins/group/kindnet/KubeletFlags 0.79
307 TestNetworkPlugins/group/kindnet/NetCatPod 20.55
308 TestNetworkPlugins/group/custom-flannel/Start 106.8
309 TestNetworkPlugins/group/kindnet/DNS 0.45
310 TestNetworkPlugins/group/kindnet/Localhost 0.43
311 TestNetworkPlugins/group/kindnet/HairPin 0.52
312 TestNetworkPlugins/group/false/Start 113.96
313 TestNetworkPlugins/group/enable-default-cni/Start 103.18
314 TestNetworkPlugins/group/calico/ControllerPod 6.02
315 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.79
316 TestNetworkPlugins/group/custom-flannel/NetCatPod 18.59
317 TestNetworkPlugins/group/calico/KubeletFlags 1.07
318 TestNetworkPlugins/group/calico/NetCatPod 18.94
319 TestNetworkPlugins/group/custom-flannel/DNS 0.41
320 TestNetworkPlugins/group/custom-flannel/Localhost 0.36
321 TestNetworkPlugins/group/custom-flannel/HairPin 0.36
322 TestNetworkPlugins/group/false/KubeletFlags 0.88
323 TestNetworkPlugins/group/false/NetCatPod 18.72
324 TestNetworkPlugins/group/calico/DNS 0.52
325 TestNetworkPlugins/group/calico/Localhost 0.5
326 TestNetworkPlugins/group/calico/HairPin 0.48
327 TestNetworkPlugins/group/false/DNS 0.38
328 TestNetworkPlugins/group/false/Localhost 0.32
329 TestNetworkPlugins/group/false/HairPin 0.33
330 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.91
331 TestNetworkPlugins/group/enable-default-cni/NetCatPod 20.68
332 TestNetworkPlugins/group/flannel/Start 122.85
333 TestNetworkPlugins/group/bridge/Start 116.05
334 TestNetworkPlugins/group/enable-default-cni/DNS 0.39
335 TestNetworkPlugins/group/enable-default-cni/Localhost 0.35
336 TestNetworkPlugins/group/enable-default-cni/HairPin 0.33
337 TestNetworkPlugins/group/kubenet/Start 110.68
339 TestStartStop/group/old-k8s-version/serial/FirstStart 220.8
340 TestNetworkPlugins/group/bridge/KubeletFlags 0.79
341 TestNetworkPlugins/group/flannel/ControllerPod 6.01
342 TestNetworkPlugins/group/bridge/NetCatPod 18.57
343 TestNetworkPlugins/group/flannel/KubeletFlags 0.9
344 TestNetworkPlugins/group/flannel/NetCatPod 18.7
345 TestNetworkPlugins/group/kubenet/KubeletFlags 0.88
346 TestNetworkPlugins/group/kubenet/NetCatPod 18.67
347 TestNetworkPlugins/group/bridge/DNS 0.39
348 TestNetworkPlugins/group/bridge/Localhost 0.36
349 TestNetworkPlugins/group/bridge/HairPin 0.45
350 TestNetworkPlugins/group/flannel/DNS 0.36
351 TestNetworkPlugins/group/flannel/Localhost 0.36
352 TestNetworkPlugins/group/flannel/HairPin 0.39
353 TestNetworkPlugins/group/kubenet/DNS 0.41
354 TestNetworkPlugins/group/kubenet/Localhost 0.35
355 TestNetworkPlugins/group/kubenet/HairPin 0.35
357 TestStartStop/group/no-preload/serial/FirstStart 136.32
359 TestStartStop/group/embed-certs/serial/FirstStart 119.82
361 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 113.75
362 TestStartStop/group/old-k8s-version/serial/DeployApp 11.29
363 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.43
364 TestStartStop/group/old-k8s-version/serial/Stop 12.54
365 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.79
367 TestStartStop/group/embed-certs/serial/DeployApp 13.03
368 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 18.6
369 TestStartStop/group/no-preload/serial/DeployApp 21
370 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 7.98
371 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.71
372 TestStartStop/group/embed-certs/serial/Stop 12.57
373 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.64
374 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.31
375 TestStartStop/group/no-preload/serial/Stop 12.63
376 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.82
377 TestStartStop/group/embed-certs/serial/SecondStart 288.82
378 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.86
379 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 290.54
380 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.88
381 TestStartStop/group/no-preload/serial/SecondStart 293.57
382 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
383 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
384 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.34
385 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.42
386 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.71
387 TestStartStop/group/embed-certs/serial/Pause 7.45
388 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
389 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.63
390 TestStartStop/group/default-k8s-diff-port/serial/Pause 7.58
391 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.49
392 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.77
393 TestStartStop/group/no-preload/serial/Pause 8.6
395 TestStartStop/group/newest-cni/serial/FirstStart 67.61
396 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
397 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.51
398 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.67
399 TestStartStop/group/old-k8s-version/serial/Pause 7.96
400 TestStartStop/group/newest-cni/serial/DeployApp 0
401 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 3.63
402 TestStartStop/group/newest-cni/serial/Stop 8.79
403 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.82
404 TestStartStop/group/newest-cni/serial/SecondStart 29.21
405 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
406 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
407 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.85
408 TestStartStop/group/newest-cni/serial/Pause 7.37
TestDownloadOnly/v1.20.0/json-events (9.56s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-447300 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-447300 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker: (9.5612576s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.56s)

TestDownloadOnly/v1.20.0/preload-exists (0.08s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0923 10:20:59.877246    4316 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0923 10:20:59.952904    4316 preload.go:146] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.08s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.28s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-447300
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-447300: exit status 85 (275.4637ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-447300 | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:20 UTC |          |
	|         | -p download-only-447300        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=docker                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:20:50
	Running on machine: minikube4
	Binary: Built with gc go1.23.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 10:20:50.418749     744 out.go:345] Setting OutFile to fd 756 ...
	I0923 10:20:50.488916     744 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:20:50.488916     744 out.go:358] Setting ErrFile to fd 760...
	I0923 10:20:50.489909     744 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 10:20:50.502522     744 root.go:314] Error reading config file at C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0923 10:20:50.514753     744 out.go:352] Setting JSON to true
	I0923 10:20:50.519765     744 start.go:129] hostinfo: {"hostname":"minikube4","uptime":47413,"bootTime":1727039436,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0923 10:20:50.520820     744 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 10:20:50.525871     744 out.go:97] [download-only-447300] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0923 10:20:50.525871     744 notify.go:220] Checking for updates...
	W0923 10:20:50.525871     744 preload.go:293] Failed to list preload files: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0923 10:20:50.527882     744 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0923 10:20:50.530853     744 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0923 10:20:50.532885     744 out.go:169] MINIKUBE_LOCATION=19689
	I0923 10:20:50.535008     744 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0923 10:20:50.538584     744 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 10:20:50.539802     744 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:20:50.836406     744 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.1 (166053)
	I0923 10:20:50.846419     744 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:20:52.020250     744 cli_runner.go:217] Completed: docker system info --format "{{json .}}": (1.1737756s)
	I0923 10:20:52.022043     744 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:75 SystemTime:2024-09-23 10:20:51.985048111 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0923 10:20:52.025686     744 out.go:97] Using the docker driver based on user configuration
	I0923 10:20:52.025743     744 start.go:297] selected driver: docker
	I0923 10:20:52.025838     744 start.go:901] validating driver "docker" against <nil>
	I0923 10:20:52.042316     744 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:20:52.365430     744 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:75 SystemTime:2024-09-23 10:20:52.331483103 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0923 10:20:52.365430     744 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 10:20:52.472784     744 start_flags.go:393] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I0923 10:20:52.473840     744 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 10:20:52.477396     744 out.go:169] Using Docker Desktop driver with root privileges
	I0923 10:20:52.478731     744 cni.go:84] Creating CNI manager for ""
	I0923 10:20:52.479692     744 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0923 10:20:52.479852     744 start.go:340] cluster config:
	{Name:download-only-447300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-447300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:20:52.481400     744 out.go:97] Starting "download-only-447300" primary control-plane node in "download-only-447300" cluster
	I0923 10:20:52.481400     744 cache.go:121] Beginning downloading kic base image for docker with docker
	I0923 10:20:52.484146     744 out.go:97] Pulling base image v0.0.45-1726784731-19672 ...
	I0923 10:20:52.484146     744 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 10:20:52.484146     744 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 10:20:52.534088     744 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0923 10:20:52.534088     744 cache.go:56] Caching tarball of preloaded images
	I0923 10:20:52.534088     744 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 10:20:52.537086     744 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0923 10:20:52.537086     744 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0923 10:20:52.555124     744 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 10:20:52.555124     744 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1726784731-19672@sha256_7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed.tar
	I0923 10:20:52.556099     744 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1726784731-19672@sha256_7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed.tar
	I0923 10:20:52.556099     744 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 10:20:52.558407     744 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 10:20:52.605377     744 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-447300 host does not exist
	  To start a cluster, run: "minikube start -p download-only-447300"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.28s)

TestDownloadOnly/v1.20.0/DeleteAll (1.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.1953992s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.20s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.89s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-447300
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.89s)

TestDownloadOnly/v1.31.1/json-events (6.88s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-329100 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-329100 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker: (6.8772207s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (6.88s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0923 10:21:09.193193    4316 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0923 10:21:09.193193    4316 preload.go:146] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.24s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-329100
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-329100: exit status 85 (243.594ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-447300 | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:20 UTC |                     |
	|         | -p download-only-447300        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=docker                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| delete  | -p download-only-447300        | download-only-447300 | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| start   | -o=json --download-only        | download-only-329100 | minikube4\jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | -p download-only-329100        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=docker                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:21:02
	Running on machine: minikube4
	Binary: Built with gc go1.23.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 10:21:02.411493    3304 out.go:345] Setting OutFile to fd 796 ...
	I0923 10:21:02.478472    3304 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:21:02.478472    3304 out.go:358] Setting ErrFile to fd 840...
	I0923 10:21:02.478472    3304 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:21:02.499151    3304 out.go:352] Setting JSON to true
	I0923 10:21:02.502114    3304 start.go:129] hostinfo: {"hostname":"minikube4","uptime":47425,"bootTime":1727039436,"procs":194,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0923 10:21:02.502114    3304 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 10:21:02.510112    3304 out.go:97] [download-only-329100] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0923 10:21:02.510571    3304 notify.go:220] Checking for updates...
	I0923 10:21:02.512585    3304 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0923 10:21:02.515148    3304 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0923 10:21:02.519247    3304 out.go:169] MINIKUBE_LOCATION=19689
	I0923 10:21:02.523621    3304 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0923 10:21:02.528756    3304 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 10:21:02.529163    3304 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:21:02.696062    3304 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.1 (166053)
	I0923 10:21:02.708198    3304 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:21:03.020926    3304 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:75 SystemTime:2024-09-23 10:21:02.988997936 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0923 10:21:03.023761    3304 out.go:97] Using the docker driver based on user configuration
	I0923 10:21:03.023761    3304 start.go:297] selected driver: docker
	I0923 10:21:03.023761    3304 start.go:901] validating driver "docker" against <nil>
	I0923 10:21:03.040762    3304 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:21:03.356310    3304 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:75 SystemTime:2024-09-23 10:21:03.331496887 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0923 10:21:03.356982    3304 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 10:21:03.402471    3304 start_flags.go:393] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I0923 10:21:03.403734    3304 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 10:21:03.679700    3304 out.go:169] Using Docker Desktop driver with root privileges
	I0923 10:21:03.682880    3304 cni.go:84] Creating CNI manager for ""
	I0923 10:21:03.682880    3304 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 10:21:03.682880    3304 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 10:21:03.683140    3304 start.go:340] cluster config:
	{Name:download-only-329100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-329100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:21:03.685643    3304 out.go:97] Starting "download-only-329100" primary control-plane node in "download-only-329100" cluster
	I0923 10:21:03.685715    3304 cache.go:121] Beginning downloading kic base image for docker with docker
	I0923 10:21:03.687736    3304 out.go:97] Pulling base image v0.0.45-1726784731-19672 ...
	I0923 10:21:03.687736    3304 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 10:21:03.687736    3304 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 10:21:03.761070    3304 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 10:21:03.761070    3304 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1726784731-19672@sha256_7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed.tar
	I0923 10:21:03.761070    3304 localpath.go:151] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.45-1726784731-19672@sha256_7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed.tar
	I0923 10:21:03.761070    3304 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 10:21:03.761070    3304 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0923 10:21:03.761070    3304 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0923 10:21:03.762129    3304 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0923 10:21:03.763519    3304 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0923 10:21:03.763562    3304 cache.go:56] Caching tarball of preloaded images
	I0923 10:21:03.763738    3304 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 10:21:03.771665    3304 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0923 10:21:03.771665    3304 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0923 10:21:03.839350    3304 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4?checksum=md5:42e9a173dd5f0c45ed1a890dd06aec5a -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0923 10:21:06.999610    3304 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0923 10:21:07.000690    3304 preload.go:254] verifying checksum of C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-329100 host does not exist
	  To start a cluster, run: "minikube start -p download-only-329100"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.24s)

TestDownloadOnly/v1.31.1/DeleteAll (1.25s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.2451457s)
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (1.25s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.92s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-329100
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.92s)

TestDownloadOnlyKic (3.32s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-559400 --alsologtostderr --driver=docker
aaa_download_only_test.go:232: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-559400 --alsologtostderr --driver=docker: (1.7215186s)
helpers_test.go:175: Cleaning up "download-docker-559400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-559400
--- PASS: TestDownloadOnlyKic (3.32s)

TestBinaryMirror (2.9s)

=== RUN   TestBinaryMirror
I0923 10:21:16.440349    4316 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/windows/amd64/kubectl.exe.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-982700 --alsologtostderr --binary-mirror http://127.0.0.1:56883 --driver=docker
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-982700 --alsologtostderr --binary-mirror http://127.0.0.1:56883 --driver=docker: (1.880422s)
helpers_test.go:175: Cleaning up "binary-mirror-982700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-982700
--- PASS: TestBinaryMirror (2.90s)

TestOffline (131.17s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-799800 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-799800 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker: (2m4.8313638s)
helpers_test.go:175: Cleaning up "offline-docker-799800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-799800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-799800: (6.3340495s)
--- PASS: TestOffline (131.17s)
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.38s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-205800
addons_test.go:975: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-205800: exit status 85 (381.746ms)
-- stdout --
	* Profile "addons-205800" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-205800"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.38s)
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.38s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-205800
addons_test.go:986: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-205800: exit status 85 (378.6063ms)
-- stdout --
	* Profile "addons-205800" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-205800"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.38s)
TestAddons/Setup (515.61s)
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-205800 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-205800 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker --addons=ingress --addons=ingress-dns: (8m35.608589s)
--- PASS: TestAddons/Setup (515.61s)
TestAddons/serial/Volcano (56.12s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:851: volcano-controller stabilized in 18.998ms
addons_test.go:843: volcano-admission stabilized in 18.998ms
addons_test.go:835: volcano-scheduler stabilized in 18.998ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-k4mmw" [160a149f-ce22-4b2f-ac2e-adabb97fcccf] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.0087742s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-rjl66" [b626b93e-7751-4dcb-9174-eea5fdb41af2] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.0084884s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-cc5mx" [4939d628-efd9-46c0-8bad-afc3e312cc08] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.0085795s
addons_test.go:870: (dbg) Run:  kubectl --context addons-205800 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-205800 create -f testdata\vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-205800 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [fae2c593-72df-40d9-97ba-f03379e8ed28] Pending
helpers_test.go:344: "test-job-nginx-0" [fae2c593-72df-40d9-97ba-f03379e8ed28] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [fae2c593-72df-40d9-97ba-f03379e8ed28] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 27.0078229s
addons_test.go:906: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-205800 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-windows-amd64.exe -p addons-205800 addons disable volcano --alsologtostderr -v=1: (11.222014s)
--- PASS: TestAddons/serial/Volcano (56.12s)
TestAddons/serial/GCPAuth/Namespaces (0.34s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-205800 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-205800 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.34s)
TestAddons/parallel/InspektorGadget (12.58s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-qrffl" [274df95a-cd8a-4e15-8fcf-f56c51ccffcf] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0087914s
addons_test.go:789: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-205800
addons_test.go:789: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-205800: (6.5724807s)
--- PASS: TestAddons/parallel/InspektorGadget (12.58s)
TestAddons/parallel/MetricsServer (7.32s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 7.2521ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-227ml" [ef418e12-9463-459c-addb-8d5515dc9976] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0152798s
addons_test.go:413: (dbg) Run:  kubectl --context addons-205800 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-205800 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:430: (dbg) Done: out/minikube-windows-amd64.exe -p addons-205800 addons disable metrics-server --alsologtostderr -v=1: (2.1240831s)
--- PASS: TestAddons/parallel/MetricsServer (7.32s)
TestAddons/parallel/CSI (65.18s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I0923 10:39:28.131287    4316 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0923 10:39:28.142364    4316 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0923 10:39:28.142364    4316 kapi.go:107] duration metric: took 11.0766ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 11.0766ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-205800 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-205800 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [7a1a4b46-014f-44f2-ae78-7087c32e2f8f] Pending
helpers_test.go:344: "task-pv-pod" [7a1a4b46-014f-44f2-ae78-7087c32e2f8f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [7a1a4b46-014f-44f2-ae78-7087c32e2f8f] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.0085217s
addons_test.go:528: (dbg) Run:  kubectl --context addons-205800 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-205800 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-205800 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-205800 delete pod task-pv-pod
addons_test.go:538: (dbg) Done: kubectl --context addons-205800 delete pod task-pv-pod: (2.7650919s)
addons_test.go:544: (dbg) Run:  kubectl --context addons-205800 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-205800 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-205800 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [58bfe176-21a6-4f5f-91ff-9cbf53bf9df5] Pending
helpers_test.go:344: "task-pv-pod-restore" [58bfe176-21a6-4f5f-91ff-9cbf53bf9df5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [58bfe176-21a6-4f5f-91ff-9cbf53bf9df5] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.0079514s
addons_test.go:570: (dbg) Run:  kubectl --context addons-205800 delete pod task-pv-pod-restore
addons_test.go:570: (dbg) Done: kubectl --context addons-205800 delete pod task-pv-pod-restore: (1.4116972s)
addons_test.go:574: (dbg) Run:  kubectl --context addons-205800 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-205800 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-205800 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-windows-amd64.exe -p addons-205800 addons disable csi-hostpath-driver --alsologtostderr -v=1: (8.0165901s)
addons_test.go:586: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-205800 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:586: (dbg) Done: out/minikube-windows-amd64.exe -p addons-205800 addons disable volumesnapshots --alsologtostderr -v=1: (1.7325136s)
--- PASS: TestAddons/parallel/CSI (65.18s)
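The repeated `kubectl get pvc ... -o jsonpath={.status.phase}` lines above are the harness polling until the claim reports `Bound`. A minimal sketch of that retry loop; the `fake_pvc_phase` function is a hypothetical stub standing in for the real `kubectl --context addons-205800 get pvc hpvc -o jsonpath={.status.phase}` call, which needs a live cluster:

```shell
#!/bin/sh
# Poll a command until it prints the wanted value, like the test harness's PVC wait.
wait_for_phase() {
  want="$1"; shift
  i=0
  while [ "$i" -lt 20 ]; do
    phase="$("$@")"                     # run the probe command, capture its output
    [ "$phase" = "$want" ] && return 0  # done once the phase matches
    i=$((i + 1))
    sleep 1                             # back off before the next poll
  done
  return 1                              # gave up: phase never reached
}

# Hypothetical stub in place of the kubectl jsonpath probe above.
fake_pvc_phase() { echo "Bound"; }

wait_for_phase Bound fake_pvc_phase && echo "pvc is Bound"
```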
TestAddons/parallel/Headlamp (32.4s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-205800 --alsologtostderr -v=1
addons_test.go:768: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-205800 --alsologtostderr -v=1: (1.757966s)
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-b7dt7" [39de5d39-7e30-45dc-a112-b48a529a4420] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-b7dt7" [39de5d39-7e30-45dc-a112-b48a529a4420] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-b7dt7" [39de5d39-7e30-45dc-a112-b48a529a4420] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 24.0112005s
addons_test.go:777: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-205800 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-windows-amd64.exe -p addons-205800 addons disable headlamp --alsologtostderr -v=1: (6.6284025s)
--- PASS: TestAddons/parallel/Headlamp (32.40s)
TestAddons/parallel/CloudSpanner (7.13s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-wfklx" [a6b0ffcb-01d4-4913-bdab-b61d23fb8ab9] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.0062085s
addons_test.go:808: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-205800
addons_test.go:808: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-205800: (1.1115582s)
--- PASS: TestAddons/parallel/CloudSpanner (7.13s)
TestAddons/parallel/LocalPath (69.17s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-205800 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-205800 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-205800 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [da17cd28-d140-4a77-8993-fca3f78374a4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [da17cd28-d140-4a77-8993-fca3f78374a4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [da17cd28-d140-4a77-8993-fca3f78374a4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 15.0083059s
addons_test.go:938: (dbg) Run:  kubectl --context addons-205800 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-205800 ssh "cat /opt/local-path-provisioner/pvc-ba41e5d6-ad17-4871-8b82-be93f5551393_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-205800 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-205800 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-205800 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-windows-amd64.exe -p addons-205800 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (44.7560281s)
--- PASS: TestAddons/parallel/LocalPath (69.17s)
TestAddons/parallel/NvidiaDevicePlugin (7.83s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-7gt49" [305cf315-1bc1-4aa8-9a3b-0947e4e7da3c] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0071653s
addons_test.go:1002: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-205800
addons_test.go:1002: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-205800: (1.810094s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (7.83s)
TestAddons/parallel/Yakd (13.52s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-248hx" [2f62c69a-da24-430d-bb29-8b0860a1410f] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.041992s
addons_test.go:1014: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-205800 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-windows-amd64.exe -p addons-205800 addons disable yakd --alsologtostderr -v=1: (7.471745s)
--- PASS: TestAddons/parallel/Yakd (13.52s)
TestAddons/StoppedEnableDisable (13.4s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-205800
addons_test.go:170: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-205800: (12.2598249s)
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-205800
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-205800
addons_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-205800
--- PASS: TestAddons/StoppedEnableDisable (13.40s)
TestCertOptions (89.19s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-734900 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-734900 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (1m21.8998506s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-734900 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-734900 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (1.0800437s)
I0923 11:40:56.968033    4316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8555/tcp") 0).HostPort}}'" cert-options-734900
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-734900 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-734900 -- "sudo cat /etc/kubernetes/admin.conf": (1.2704498s)
helpers_test.go:175: Cleaning up "cert-options-734900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-734900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-734900: (4.81991s)
--- PASS: TestCertOptions (89.19s)
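The cert-options assertions above boil down to: the extra `--apiserver-names`/`--apiserver-ips` flags must surface as SANs in `/var/lib/minikube/certs/apiserver.crt`. A rough local reproduction with a throwaway self-signed cert; the file paths and CN here are made up for illustration, and `-addext` needs OpenSSL 1.1.1 or newer:

```shell
#!/bin/sh
# Generate a throwaway cert carrying the same SANs the test passes to minikube,
# then check for them the same way the test does: openssl x509 -text | grep.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/apiserver-demo.key -out /tmp/apiserver-demo.crt \
  -subj "/CN=minikube-demo" \
  -addext "subjectAltName=DNS:localhost,DNS:www.google.com,IP:127.0.0.1,IP:192.168.15.15" \
  2>/dev/null

# Print the SAN line; the extra names and IPs should all appear in it.
openssl x509 -text -noout -in /tmp/apiserver-demo.crt | grep -F "www.google.com"
```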
TestCertExpiration (309.7s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-673800 --memory=2048 --cert-expiration=3m --driver=docker
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-673800 --memory=2048 --cert-expiration=3m --driver=docker: (1m15.5933124s)
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-673800 --memory=2048 --cert-expiration=8760h --driver=docker
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-673800 --memory=2048 --cert-expiration=8760h --driver=docker: (46.7847132s)
helpers_test.go:175: Cleaning up "cert-expiration-673800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-673800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-673800: (7.3210429s)
--- PASS: TestCertExpiration (309.70s)

TestDockerFlags (85.39s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-967900 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-967900 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (1m18.4750862s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-967900 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-967900 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-967900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-967900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-967900: (5.1843019s)
--- PASS: TestDockerFlags (85.39s)

TestForceSystemdFlag (80.5s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-706900 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-706900 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker: (1m13.6610769s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-706900 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-706900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-706900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-706900: (5.927972s)
--- PASS: TestForceSystemdFlag (80.50s)

TestForceSystemdEnv (104.24s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-947500 --memory=2048 --alsologtostderr -v=5 --driver=docker
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-947500 --memory=2048 --alsologtostderr -v=5 --driver=docker: (1m35.2667255s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-947500 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-947500 ssh "docker info --format {{.CgroupDriver}}": (1.5669672s)
helpers_test.go:175: Cleaning up "force-systemd-env-947500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-947500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-947500: (7.4071255s)
--- PASS: TestForceSystemdEnv (104.24s)

TestErrorSpam/start (3.74s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-232800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-232800 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-232800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-232800 start --dry-run: (1.2293707s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-232800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-232800 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-232800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-232800 start --dry-run: (1.2617578s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-232800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-232800 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-232800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-232800 start --dry-run: (1.2500303s)
--- PASS: TestErrorSpam/start (3.74s)

TestErrorSpam/status (2.71s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-232800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-232800 status
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-232800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-232800 status
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-232800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-232800 status
--- PASS: TestErrorSpam/status (2.71s)

TestErrorSpam/pause (3.39s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-232800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-232800 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-232800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-232800 pause: (1.5840911s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-232800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-232800 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-232800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-232800 pause
--- PASS: TestErrorSpam/pause (3.39s)

TestErrorSpam/unpause (3.26s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-232800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-232800 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-232800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-232800 unpause: (1.1892791s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-232800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-232800 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-232800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-232800 unpause: (1.1467603s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-232800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-232800 unpause
--- PASS: TestErrorSpam/unpause (3.26s)

TestErrorSpam/stop (13.9s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-232800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-232800 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-232800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-232800 stop: (6.7631092s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-232800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-232800 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-232800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-232800 stop: (3.8969257s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-232800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-232800 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-232800 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-232800 stop: (3.2353562s)
--- PASS: TestErrorSpam/stop (13.90s)

TestFunctional/serial/CopySyncFile (0.04s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\4316\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.04s)

TestFunctional/serial/StartWithProxy (94.25s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-734700 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker
functional_test.go:2234: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-734700 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker: (1m34.2363235s)
--- PASS: TestFunctional/serial/StartWithProxy (94.25s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (43.64s)

=== RUN   TestFunctional/serial/SoftStart
I0923 10:43:58.348937    4316 config.go:182] Loaded profile config "functional-734700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-734700 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-734700 --alsologtostderr -v=8: (43.6371769s)
functional_test.go:663: soft start took 43.6390363s for "functional-734700" cluster.
I0923 10:44:41.989531    4316 config.go:182] Loaded profile config "functional-734700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (43.64s)

TestFunctional/serial/KubeContext (0.13s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.13s)

TestFunctional/serial/KubectlGetPods (0.23s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-734700 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.23s)

TestFunctional/serial/CacheCmd/cache/add_remote (6.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-windows-amd64.exe -p functional-734700 cache add registry.k8s.io/pause:3.1: (2.1700125s)
functional_test.go:1049: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-windows-amd64.exe -p functional-734700 cache add registry.k8s.io/pause:3.3: (2.0113261s)
functional_test.go:1049: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-windows-amd64.exe -p functional-734700 cache add registry.k8s.io/pause:latest: (2.0173824s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (6.20s)

TestFunctional/serial/CacheCmd/cache/add_local (3.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-734700 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3976934381\001
functional_test.go:1077: (dbg) Done: docker build -t minikube-local-cache-test:functional-734700 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local3976934381\001: (1.5417796s)
functional_test.go:1089: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 cache add minikube-local-cache-test:functional-734700
functional_test.go:1089: (dbg) Done: out/minikube-windows-amd64.exe -p functional-734700 cache add minikube-local-cache-test:functional-734700: (1.4947506s)
functional_test.go:1094: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 cache delete minikube-local-cache-test:functional-734700
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-734700
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (3.43s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.25s)

TestFunctional/serial/CacheCmd/cache/list (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.26s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.79s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.79s)

TestFunctional/serial/CacheCmd/cache/cache_reload (3.89s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-734700 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (788.0897ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 cache reload
E0923 10:44:55.399190    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 10:44:55.405982    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 10:44:55.419419    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 10:44:55.440962    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 10:44:55.482823    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 10:44:55.566316    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 10:44:55.728090    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 10:44:56.049731    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:1158: (dbg) Done: out/minikube-windows-amd64.exe -p functional-734700 cache reload: (1.5727966s)
functional_test.go:1163: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 ssh sudo crictl inspecti registry.k8s.io/pause:latest
E0923 10:44:56.691242    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (3.89s)

TestFunctional/serial/CacheCmd/cache/delete (0.49s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.49s)

TestFunctional/serial/MinikubeKubectlCmd (0.52s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 kubectl -- --context functional-734700 get pods
E0923 10:44:57.974329    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.52s)

TestFunctional/serial/ExtraConfig (48.43s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-734700 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0923 10:45:05.659183    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 10:45:15.902212    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 10:45:36.386683    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-734700 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (48.4257859s)
functional_test.go:761: restart took 48.4257859s for "functional-734700" cluster.
I0923 10:45:51.936298    4316 config.go:182] Loaded profile config "functional-734700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (48.43s)

TestFunctional/serial/ComponentHealth (0.18s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-734700 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.18s)

TestFunctional/serial/LogsCmd (2.31s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 logs
functional_test.go:1236: (dbg) Done: out/minikube-windows-amd64.exe -p functional-734700 logs: (2.305191s)
--- PASS: TestFunctional/serial/LogsCmd (2.31s)

TestFunctional/serial/LogsFileCmd (2.4s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialLogsFileCmd4034599459\001\logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-windows-amd64.exe -p functional-734700 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialLogsFileCmd4034599459\001\logs.txt: (2.3928591s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.40s)

TestFunctional/serial/InvalidService (5.59s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-734700 apply -f testdata\invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-734700
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-734700: exit status 115 (1.1752304s)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31444 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_service_9c977cb937a5c6299cc91c983e64e702e081bf76_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-734700 delete -f testdata\invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (5.59s)

TestFunctional/parallel/ConfigCmd (1.84s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-734700 config get cpus: exit status 14 (299.5255ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-734700 config get cpus: exit status 14 (300.2419ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (1.84s)
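The config round-trip above relies on `minikube config get` exiting with status 14 once the key has been unset, and the harness recording that as "Non-zero exit: ... exit status 14". A minimal sketch of capturing an exit code that way, using `sh -c "exit 14"` as a stand-in for the real minikube binary (which may not be installed where this runs):

```python
import subprocess

def run_and_capture(args):
    """Run a command; return (exit_code, stderr) without raising."""
    proc = subprocess.run(args, capture_output=True, text=True)
    return proc.returncode, proc.stderr

# Stand-in for `minikube config get cpus` after `config unset cpus`,
# which the log above shows exiting with status 14.
code, _ = run_and_capture(["sh", "-c", "exit 14"])
print(code)  # → 14
```

The harness treats a non-zero code with captured stderr as a recordable result rather than a test-infrastructure failure, which is why the section above still ends in PASS.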

TestFunctional/parallel/DryRun (2.66s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-734700 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-734700 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (1.0833646s)

-- stdout --
	* [functional-734700] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0923 10:46:49.768789    3736 out.go:345] Setting OutFile to fd 1308 ...
	I0923 10:46:49.867791    3736 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:46:49.867791    3736 out.go:358] Setting ErrFile to fd 1228...
	I0923 10:46:49.867791    3736 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:46:49.895797    3736 out.go:352] Setting JSON to false
	I0923 10:46:49.899776    3736 start.go:129] hostinfo: {"hostname":"minikube4","uptime":48972,"bootTime":1727039436,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0923 10:46:49.899776    3736 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 10:46:49.905789    3736 out.go:177] * [functional-734700] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0923 10:46:49.907781    3736 notify.go:220] Checking for updates...
	I0923 10:46:49.910780    3736 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0923 10:46:49.912782    3736 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 10:46:49.920576    3736 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0923 10:46:49.923468    3736 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 10:46:49.926019    3736 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:46:49.930174    3736 config.go:182] Loaded profile config "functional-734700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:46:49.931029    3736 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:46:50.176013    3736 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.1 (166053)
	I0923 10:46:50.189019    3736 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:46:50.585191    3736 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:65 OomKillDisable:true NGoroutines:82 SystemTime:2024-09-23 10:46:50.553521627 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0923 10:46:50.589208    3736 out.go:177] * Using the docker driver based on existing profile
	I0923 10:46:50.591192    3736 start.go:297] selected driver: docker
	I0923 10:46:50.591192    3736 start.go:901] validating driver "docker" against &{Name:functional-734700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-734700 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:46:50.591192    3736 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 10:46:50.657179    3736 out.go:201] 
	W0923 10:46:50.659293    3736 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0923 10:46:50.661198    3736 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-734700 --dry-run --alsologtostderr -v=1 --driver=docker
functional_test.go:991: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-734700 --dry-run --alsologtostderr -v=1 --driver=docker: (1.5745303s)
--- PASS: TestFunctional/parallel/DryRun (2.66s)
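The dry-run failure above is minikube's preflight check rejecting `--memory 250MB` against the 1800MB floor quoted in the RSRC_INSUFFICIENT_REQ_MEMORY message. A rough sketch of that kind of unit-aware validation; the threshold is taken from the error text, but the parsing rules here are illustrative, not minikube's actual implementation:

```python
import re

# Decimal (MB/GB) vs binary (MiB/GiB) multipliers.
UNITS = {"MB": 10**6, "MIB": 2**20, "GB": 10**9, "GIB": 2**30}
MIN_BYTES = 1800 * UNITS["MB"]  # floor quoted in the error message

def parse_mem(s):
    """Turn a string like '250MB' into a byte count."""
    m = re.fullmatch(r"(\d+)\s*([A-Za-z]+)", s.strip())
    if not m or m.group(2).upper() not in UNITS:
        raise ValueError(f"unparseable memory string: {s!r}")
    return int(m.group(1)) * UNITS[m.group(2).upper()]

def check(s):
    if parse_mem(s) < MIN_BYTES:
        return f"RSRC_INSUFFICIENT_REQ_MEMORY: {s} is less than 1800MB"
    return "ok"

print(check("250MB"))   # rejected, like the run above
print(check("4000MB"))  # accepted (the profile's configured 4000MB)
```

Note the message in the log mixes units (250MiB requested vs 1800MB minimum), which is why a validator has to normalize both sides to bytes before comparing.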

TestFunctional/parallel/InternationalLanguage (1.01s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-734700 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-734700 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (1.0116845s)

-- stdout --
	* [functional-734700] minikube v1.34.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0923 10:46:02.541357    6824 out.go:345] Setting OutFile to fd 1260 ...
	I0923 10:46:02.640372    6824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:46:02.641387    6824 out.go:358] Setting ErrFile to fd 1264...
	I0923 10:46:02.641387    6824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:46:02.685401    6824 out.go:352] Setting JSON to false
	I0923 10:46:02.688390    6824 start.go:129] hostinfo: {"hostname":"minikube4","uptime":48925,"bootTime":1727039436,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4894 Build 19045.4894","kernelVersion":"10.0.19045.4894 Build 19045.4894","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W0923 10:46:02.688390    6824 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0923 10:46:02.692378    6824 out.go:177] * [functional-734700] minikube v1.34.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	I0923 10:46:02.694376    6824 notify.go:220] Checking for updates...
	I0923 10:46:02.696380    6824 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I0923 10:46:02.698377    6824 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 10:46:02.701374    6824 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I0923 10:46:02.706370    6824 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 10:46:02.708381    6824 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:46:02.711392    6824 config.go:182] Loaded profile config "functional-734700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:46:02.712366    6824 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:46:02.910143    6824 docker.go:123] docker version: linux-27.2.0:Docker Desktop 4.34.1 (166053)
	I0923 10:46:02.928929    6824 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:46:03.291353    6824 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:83 SystemTime:2024-09-23 10:46:03.256619072 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:12 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.34] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe Schema
Version:0.1.0 ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.15] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https:/
/github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.13.0]] Warnings:<nil>}}
	I0923 10:46:03.295354    6824 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0923 10:46:03.298353    6824 start.go:297] selected driver: docker
	I0923 10:46:03.298353    6824 start.go:901] validating driver "docker" against &{Name:functional-734700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-734700 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube4:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:46:03.298353    6824 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 10:46:03.376359    6824 out.go:201] 
	W0923 10:46:03.379366    6824 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0923 10:46:03.382364    6824 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (1.01s)
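The French output above ("Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY...") is the same dry-run failure as the previous section, rendered through a locale-selected message template. A toy version of that catalog lookup, with the two templates copied from the runs above; the selection mechanism itself is illustrative, not minikube's:

```python
CATALOG = {
    "en": ("Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory "
           "allocation {req} is less than the usable minimum of {min}"),
    "fr": ("Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : "
           "L'allocation de mémoire demandée {req} est inférieure au "
           "minimum utilisable de {min}"),
}

def render(locale, req, minimum):
    # Fall back to English for locales without a translation.
    template = CATALOG.get(locale, CATALOG["en"])
    return template.format(req=req, min=minimum)

print(render("fr", "250 Mio", "1800 Mo"))
print(render("en", "250MiB", "1800MB"))
```

The test passes on a non-zero exit precisely because it only checks that the localized message, not the English one, reaches stderr.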

TestFunctional/parallel/StatusCmd (2.86s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 status
functional_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 status -o json
functional_test.go:872: (dbg) Done: out/minikube-windows-amd64.exe -p functional-734700 status -o json: (1.0559194s)
--- PASS: TestFunctional/parallel/StatusCmd (2.86s)

TestFunctional/parallel/AddonsCmd (0.71s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.71s)

TestFunctional/parallel/PersistentVolumeClaim (100.62s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [7c1fbec1-bfde-4344-83de-2f498ff7c38a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.0095842s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-734700 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-734700 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-734700 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-734700 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [641db1c7-3aa4-4049-a048-6d81cad29541] Pending
helpers_test.go:344: "sp-pod" [641db1c7-3aa4-4049-a048-6d81cad29541] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0923 10:46:17.351416    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:344: "sp-pod" [641db1c7-3aa4-4049-a048-6d81cad29541] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 40.0114003s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-734700 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-734700 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-734700 delete -f testdata/storage-provisioner/pod.yaml: (2.0478518s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-734700 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [675395bd-509b-4ab0-bac9-f4bf117581ae] Pending
helpers_test.go:344: "sp-pod" [675395bd-509b-4ab0-bac9-f4bf117581ae] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [675395bd-509b-4ab0-bac9-f4bf117581ae] Running
E0923 10:47:39.277986    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 50.0074282s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-734700 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (100.62s)
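The waits above ("healthy within 40.0114003s", "healthy within 50.0074282s") follow a poll-until-ready pattern with an overall deadline, the same shape as the "waiting 3m0s for pods matching ..." lines. A generic sketch of that loop, with a fake readiness probe standing in for the kubectl pod-phase check:

```python
import time

def wait_for(predicate, timeout, interval=0.05):
    """Poll predicate() until it returns True or the deadline passes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

# Fake probe: the "pod" moves Pending -> Running on the third check,
# mirroring the Pending/ContainersNotReady/Running transitions logged above.
state = {"checks": 0}
def pod_running():
    state["checks"] += 1
    return state["checks"] >= 3

assert wait_for(pod_running, timeout=2.0)
print("healthy after", state["checks"], "checks")
```

Reporting the elapsed time at success (rather than just pass/fail) is what produces the "healthy within ..." figures in the log.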

TestFunctional/parallel/SSHCmd (1.79s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.79s)

TestFunctional/parallel/CpCmd (5.01s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 ssh -n functional-734700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 cp functional-734700:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalparallelCpCmd1109186932\001\cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 ssh -n functional-734700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 ssh -n functional-734700 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (5.01s)

TestFunctional/parallel/MySQL (72.99s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-734700 replace --force -f testdata\mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-lthst" [61983a5b-ade8-4847-b12a-330d5c79419d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-lthst" [61983a5b-ade8-4847-b12a-330d5c79419d] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 57.0082967s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-734700 exec mysql-6cdb49bbb-lthst -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-734700 exec mysql-6cdb49bbb-lthst -- mysql -ppassword -e "show databases;": exit status 1 (282.496ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0923 10:47:44.225714    4316 retry.go:31] will retry after 837.236945ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-734700 exec mysql-6cdb49bbb-lthst -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-734700 exec mysql-6cdb49bbb-lthst -- mysql -ppassword -e "show databases;": exit status 1 (282.0305ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0923 10:47:45.354866    4316 retry.go:31] will retry after 2.085375306s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-734700 exec mysql-6cdb49bbb-lthst -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-734700 exec mysql-6cdb49bbb-lthst -- mysql -ppassword -e "show databases;": exit status 1 (273.5092ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0923 10:47:47.727678    4316 retry.go:31] will retry after 2.724806558s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-734700 exec mysql-6cdb49bbb-lthst -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-734700 exec mysql-6cdb49bbb-lthst -- mysql -ppassword -e "show databases;": exit status 1 (318.1591ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0923 10:47:50.784483    4316 retry.go:31] will retry after 2.322238722s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-734700 exec mysql-6cdb49bbb-lthst -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-734700 exec mysql-6cdb49bbb-lthst -- mysql -ppassword -e "show databases;": exit status 1 (290.4159ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0923 10:47:53.408301    4316 retry.go:31] will retry after 5.74703732s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-734700 exec mysql-6cdb49bbb-lthst -- mysql -ppassword -e "show databases;"
E0923 10:49:55.412576    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 10:50:23.127925    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
--- PASS: TestFunctional/parallel/MySQL (72.99s)

TestFunctional/parallel/FileSync (0.77s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/4316/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 ssh "sudo cat /etc/test/nested/copy/4316/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.77s)

TestFunctional/parallel/CertSync (4.62s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/4316.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 ssh "sudo cat /etc/ssl/certs/4316.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/4316.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 ssh "sudo cat /usr/share/ca-certificates/4316.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/43162.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 ssh "sudo cat /etc/ssl/certs/43162.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/43162.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 ssh "sudo cat /usr/share/ca-certificates/43162.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (4.62s)

TestFunctional/parallel/NodeLabels (0.23s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-734700 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.23s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.92s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-734700 ssh "sudo systemctl is-active crio": exit status 1 (921.6327ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.92s)

TestFunctional/parallel/License (3.64s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2288: (dbg) Done: out/minikube-windows-amd64.exe license: (3.6208586s)
--- PASS: TestFunctional/parallel/License (3.64s)

TestFunctional/parallel/ProfileCmd/profile_not_create (1.61s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1275: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.261604s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (1.61s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-734700 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-734700 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-734700 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-734700 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 7644: OpenProcess: The parameter is incorrect.
helpers_test.go:502: unable to terminate pid 2148: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.11s)

TestFunctional/parallel/ProfileCmd/profile_list (1.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1310: (dbg) Done: out/minikube-windows-amd64.exe profile list: (1.2128812s)
functional_test.go:1315: Took "1.2128812s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1329: Took "313.0194ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (1.53s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-734700 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (24.64s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-734700 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [7755c7ff-4a23-411d-88f1-4eefca619115] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [7755c7ff-4a23-411d-88f1-4eefca619115] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 24.0166953s
I0923 10:46:29.191043    4316 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (24.64s)

TestFunctional/parallel/ProfileCmd/profile_json_output (1.53s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1361: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (1.2960095s)
functional_test.go:1366: Took "1.2960095s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1379: Took "231.6795ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (1.53s)

TestFunctional/parallel/ServiceCmd/DeployApp (23.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-734700 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-734700 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-qzzmh" [2236b652-f6a9-42b7-9fb3-8ea3d13cb72d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-qzzmh" [2236b652-f6a9-42b7-9fb3-8ea3d13cb72d] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 23.0083187s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (23.46s)

TestFunctional/parallel/Version/short (0.24s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 version --short
--- PASS: TestFunctional/parallel/Version/short (0.24s)

TestFunctional/parallel/Version/components (1.51s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-windows-amd64.exe -p functional-734700 version -o=json --components: (1.5130693s)
--- PASS: TestFunctional/parallel/Version/components (1.51s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-734700 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-734700
docker.io/kicbase/echo-server:functional-734700
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-734700 image ls --format short --alsologtostderr:
I0923 10:47:20.127484    8796 out.go:345] Setting OutFile to fd 1080 ...
I0923 10:47:20.201628    8796 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:47:20.201628    8796 out.go:358] Setting ErrFile to fd 1716...
I0923 10:47:20.201628    8796 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:47:20.214628    8796 config.go:182] Loaded profile config "functional-734700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:47:20.215646    8796 config.go:182] Loaded profile config "functional-734700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:47:20.233629    8796 cli_runner.go:164] Run: docker container inspect functional-734700 --format={{.State.Status}}
I0923 10:47:20.310231    8796 ssh_runner.go:195] Run: systemctl --version
I0923 10:47:20.318230    8796 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-734700
I0923 10:47:20.400526    8796 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57731 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-734700\id_rsa Username:docker}
I0923 10:47:20.595115    8796 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.65s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-734700 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-734700 | c39381429a554 | 30B    |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/nginx                     | alpine            | c7b4f26a7d93f | 43.2MB |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| docker.io/kicbase/echo-server               | functional-734700 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| localhost/my-image                          | functional-734700 | 6f1b9def4f06b | 1.24MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-734700 image ls --format table --alsologtostderr:
I0923 10:47:32.222959    6668 out.go:345] Setting OutFile to fd 1640 ...
I0923 10:47:32.293959    6668 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:47:32.293959    6668 out.go:358] Setting ErrFile to fd 1480...
I0923 10:47:32.293959    6668 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:47:32.316957    6668 config.go:182] Loaded profile config "functional-734700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:47:32.316957    6668 config.go:182] Loaded profile config "functional-734700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:47:32.335969    6668 cli_runner.go:164] Run: docker container inspect functional-734700 --format={{.State.Status}}
I0923 10:47:32.419436    6668 ssh_runner.go:195] Run: systemctl --version
I0923 10:47:32.427430    6668 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-734700
I0923 10:47:32.494453    6668 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57731 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-734700\id_rsa Username:docker}
I0923 10:47:32.626085    6668 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.60s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-734700 image ls --format json --alsologtostderr:
[{"id":"6f1b9def4f06b3454e80bff7d122a3e4fa814c66b369c0e58db070bf07cf8a2b","repoDigests":[],"repoTags":["localhost/my-image:functional-734700"],"size":"1240000"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-734700"],"size":"4940000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"c39381429a55466c7ea88a9686f94fe0f43aa7f731f728ea47ea5e505bfaa1fa","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-734700"],"size":"30"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-734700 image ls --format json --alsologtostderr:
I0923 10:47:31.562079    3908 out.go:345] Setting OutFile to fd 1764 ...
I0923 10:47:31.635081    3908 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:47:31.635081    3908 out.go:358] Setting ErrFile to fd 1380...
I0923 10:47:31.635081    3908 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:47:31.654092    3908 config.go:182] Loaded profile config "functional-734700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:47:31.655121    3908 config.go:182] Loaded profile config "functional-734700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:47:31.672089    3908 cli_runner.go:164] Run: docker container inspect functional-734700 --format={{.State.Status}}
I0923 10:47:31.761082    3908 ssh_runner.go:195] Run: systemctl --version
I0923 10:47:31.768091    3908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-734700
I0923 10:47:31.850082    3908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57731 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-734700\id_rsa Username:docker}
I0923 10:47:32.012471    3908 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.68s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.62s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-734700 image ls --format yaml --alsologtostderr:
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: c39381429a55466c7ea88a9686f94fe0f43aa7f731f728ea47ea5e505bfaa1fa
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-734700
size: "30"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-734700
size: "4940000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"

functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-734700 image ls --format yaml --alsologtostderr:
I0923 10:47:20.781213   11996 out.go:345] Setting OutFile to fd 1564 ...
I0923 10:47:20.855917   11996 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:47:20.855917   11996 out.go:358] Setting ErrFile to fd 1144...
I0923 10:47:20.855917   11996 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:47:20.870654   11996 config.go:182] Loaded profile config "functional-734700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:47:20.870654   11996 config.go:182] Loaded profile config "functional-734700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:47:20.890585   11996 cli_runner.go:164] Run: docker container inspect functional-734700 --format={{.State.Status}}
I0923 10:47:20.980684   11996 ssh_runner.go:195] Run: systemctl --version
I0923 10:47:20.989683   11996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-734700
I0923 10:47:21.059701   11996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57731 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-734700\id_rsa Username:docker}
I0923 10:47:21.203561   11996 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.62s)

TestFunctional/parallel/ImageCommands/ImageBuild (10.16s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-734700 ssh pgrep buildkitd: exit status 1 (740.3632ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 image build -t localhost/my-image:functional-734700 testdata\build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-windows-amd64.exe -p functional-734700 image build -t localhost/my-image:functional-734700 testdata\build --alsologtostderr: (8.7946109s)
functional_test.go:323: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-734700 image build -t localhost/my-image:functional-734700 testdata\build --alsologtostderr:
I0923 10:47:22.147107   13172 out.go:345] Setting OutFile to fd 1912 ...
I0923 10:47:22.240495   13172 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:47:22.240495   13172 out.go:358] Setting ErrFile to fd 1916...
I0923 10:47:22.240495   13172 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:47:22.254985   13172 config.go:182] Loaded profile config "functional-734700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:47:22.269985   13172 config.go:182] Loaded profile config "functional-734700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 10:47:22.289992   13172 cli_runner.go:164] Run: docker container inspect functional-734700 --format={{.State.Status}}
I0923 10:47:22.367001   13172 ssh_runner.go:195] Run: systemctl --version
I0923 10:47:22.374989   13172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-734700
I0923 10:47:22.447337   13172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57731 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-734700\id_rsa Username:docker}
I0923 10:47:22.573922   13172 build_images.go:161] Building image from path: C:\Users\jenkins.minikube4\AppData\Local\Temp\build.815159834.tar
I0923 10:47:22.591874   13172 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0923 10:47:22.622201   13172 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.815159834.tar
I0923 10:47:22.631212   13172 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.815159834.tar: stat -c "%s %y" /var/lib/minikube/build/build.815159834.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.815159834.tar': No such file or directory
I0923 10:47:22.631212   13172 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\AppData\Local\Temp\build.815159834.tar --> /var/lib/minikube/build/build.815159834.tar (3072 bytes)
I0923 10:47:22.706123   13172 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.815159834
I0923 10:47:22.744832   13172 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.815159834 -xf /var/lib/minikube/build/build.815159834.tar
I0923 10:47:22.790498   13172 docker.go:360] Building image: /var/lib/minikube/build/build.815159834
I0923 10:47:22.799531   13172 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-734700 /var/lib/minikube/build/build.815159834
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile:
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.8s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 1.0s

#6 [2/3] RUN true
#6 DONE 4.7s

#7 [3/3] ADD content.txt /
#7 DONE 0.2s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:6f1b9def4f06b3454e80bff7d122a3e4fa814c66b369c0e58db070bf07cf8a2b
#8 writing image sha256:6f1b9def4f06b3454e80bff7d122a3e4fa814c66b369c0e58db070bf07cf8a2b 0.0s done
#8 naming to localhost/my-image:functional-734700 0.0s done
#8 DONE 0.2s
I0923 10:47:30.695002   13172 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-734700 /var/lib/minikube/build/build.815159834: (7.8950978s)
I0923 10:47:30.711528   13172 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.815159834
I0923 10:47:30.748677   13172 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.815159834.tar
I0923 10:47:30.790623   13172 build_images.go:217] Built localhost/my-image:functional-734700 from C:\Users\jenkins.minikube4\AppData\Local\Temp\build.815159834.tar
I0923 10:47:30.790623   13172 build_images.go:133] succeeded building to: functional-734700
I0923 10:47:30.791169   13172 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (10.16s)

TestFunctional/parallel/ImageCommands/Setup (2.03s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.8977736s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-734700
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.03s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.18s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-734700 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.18s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-734700 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 6004: TerminateProcess: Access is denied.
helpers_test.go:508: unable to kill pid 11088: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.54s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 image load --daemon kicbase/echo-server:functional-734700 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-windows-amd64.exe -p functional-734700 image load --daemon kicbase/echo-server:functional-734700 --alsologtostderr: (2.826992s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.54s)

TestFunctional/parallel/ServiceCmd/List (1.39s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 service list
functional_test.go:1459: (dbg) Done: out/minikube-windows-amd64.exe -p functional-734700 service list: (1.3901731s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.39s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.29s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 service list -o json
functional_test.go:1489: (dbg) Done: out/minikube-windows-amd64.exe -p functional-734700 service list -o json: (1.2951078s)
functional_test.go:1494: Took "1.2951078s" to run "out/minikube-windows-amd64.exe -p functional-734700 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.29s)

TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-734700 service --namespace=default --https --url hello-node: exit status 1 (15.0113953s)

-- stdout --
	https://127.0.0.1:58030

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1522: found endpoint: https://127.0.0.1:58030
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.03s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 image load --daemon kicbase/echo-server:functional-734700 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-windows-amd64.exe -p functional-734700 image load --daemon kicbase/echo-server:functional-734700 --alsologtostderr: (1.4208181s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.03s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.15s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-734700
functional_test.go:245: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 image load --daemon kicbase/echo-server:functional-734700 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-windows-amd64.exe -p functional-734700 image load --daemon kicbase/echo-server:functional-734700 --alsologtostderr: (1.5061795s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.15s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.38s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 image save kicbase/echo-server:functional-734700 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-windows-amd64.exe -p functional-734700 image save kicbase/echo-server:functional-734700 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr: (1.3806829s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.38s)

TestFunctional/parallel/DockerEnv/powershell (7.33s)
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:499: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-734700 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-734700"
functional_test.go:499: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-734700 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-734700": (4.4053872s)
functional_test.go:522: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-734700 docker-env | Invoke-Expression ; docker images"
functional_test.go:522: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-734700 docker-env | Invoke-Expression ; docker images": (2.9172342s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (7.33s)

TestFunctional/parallel/ImageCommands/ImageRemove (1.36s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 image rm kicbase/echo-server:functional-734700 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.36s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.9s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-windows-amd64.exe -p functional-734700 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr: (1.2882935s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.90s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-734700
functional_test.go:424: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 image save --daemon kicbase/echo-server:functional-734700 --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-windows-amd64.exe -p functional-734700 image save --daemon kicbase/echo-server:functional-734700 --alsologtostderr: (1.400067s)
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-734700
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.20s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.44s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.44s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.46s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.46s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.4s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.40s)

TestFunctional/parallel/ServiceCmd/Format (15.01s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-734700 service hello-node --url --format={{.IP}}: exit status 1 (15.0134889s)

-- stdout --
	127.0.0.1

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.01s)

TestFunctional/parallel/ServiceCmd/URL (15.01s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-734700 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-734700 service hello-node --url: exit status 1 (15.0106301s)

-- stdout --
	http://127.0.0.1:58119

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1565: found endpoint for hello-node: http://127.0.0.1:58119
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.01s)

TestFunctional/delete_echo-server_images (0.2s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-734700
--- PASS: TestFunctional/delete_echo-server_images (0.20s)

TestFunctional/delete_my-image_image (0.09s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-734700
--- PASS: TestFunctional/delete_my-image_image (0.09s)

TestFunctional/delete_minikube_cached_images (0.08s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-734700
--- PASS: TestFunctional/delete_minikube_cached_images (0.08s)

TestMultiControlPlane/serial/StartCluster (205.07s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-036200 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker
E0923 10:54:55.425783    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-036200 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker: (3m22.9035586s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-036200 status -v=7 --alsologtostderr: (2.162487s)
--- PASS: TestMultiControlPlane/serial/StartCluster (205.07s)

TestMultiControlPlane/serial/DeployApp (26.43s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-036200 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-036200 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-036200 -- rollout status deployment/busybox: (16.9713617s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-036200 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-036200 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-036200 -- exec busybox-7dff88458-4sfpm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-036200 -- exec busybox-7dff88458-4sfpm -- nslookup kubernetes.io: (1.7127419s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-036200 -- exec busybox-7dff88458-bpshk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-036200 -- exec busybox-7dff88458-bpshk -- nslookup kubernetes.io: (1.5554704s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-036200 -- exec busybox-7dff88458-kdpt4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-036200 -- exec busybox-7dff88458-kdpt4 -- nslookup kubernetes.io: (1.5482263s)
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-036200 -- exec busybox-7dff88458-4sfpm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-036200 -- exec busybox-7dff88458-bpshk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-036200 -- exec busybox-7dff88458-kdpt4 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-036200 -- exec busybox-7dff88458-4sfpm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-036200 -- exec busybox-7dff88458-bpshk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-036200 -- exec busybox-7dff88458-kdpt4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (26.43s)

TestMultiControlPlane/serial/PingHostFromPods (3.61s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-036200 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-036200 -- exec busybox-7dff88458-4sfpm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-036200 -- exec busybox-7dff88458-4sfpm -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-036200 -- exec busybox-7dff88458-bpshk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-036200 -- exec busybox-7dff88458-bpshk -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-036200 -- exec busybox-7dff88458-kdpt4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-036200 -- exec busybox-7dff88458-kdpt4 -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (3.61s)
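The `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline above extracts the host IP that the pods then ping. A minimal sketch of how that extraction works, run against a sample busybox-style `nslookup` output (the sample text is illustrative, not captured from this run; real output varies by resolver):

```shell
# Sample busybox nslookup output; line 5 is the answer record for the queried name.
sample='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.65.254 host.minikube.internal'

# NR==5 selects the fifth line; field 3 (space-delimited) is the IP address.
ip=$(printf '%s\n' "$sample" | awk 'NR==5' | cut -d' ' -f3)
echo "$ip"
```

The test then runs `ping -c 1` against the extracted address, which is why both commands appear per pod in the log.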

TestMultiControlPlane/serial/AddWorkerNode (53.17s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-036200 -v=7 --alsologtostderr
E0923 10:56:04.241774    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 10:56:04.249765    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 10:56:04.262758    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 10:56:04.285755    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 10:56:04.328751    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 10:56:04.411761    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 10:56:04.574389    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 10:56:04.896946    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 10:56:05.539034    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 10:56:06.821150    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 10:56:09.384029    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 10:56:14.506078    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 10:56:24.748031    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-036200 -v=7 --alsologtostderr: (50.2736459s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 status -v=7 --alsologtostderr
E0923 10:56:45.231280    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-036200 status -v=7 --alsologtostderr: (2.8995392s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (53.17s)

TestMultiControlPlane/serial/NodeLabels (0.18s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-036200 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.18s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (2.94s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.9401147s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (2.94s)

TestMultiControlPlane/serial/CopyFile (45.62s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-036200 status --output json -v=7 --alsologtostderr: (2.7540051s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 cp testdata\cp-test.txt ha-036200:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 ssh -n ha-036200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 cp ha-036200:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2271729321\001\cp-test_ha-036200.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 ssh -n ha-036200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 cp ha-036200:/home/docker/cp-test.txt ha-036200-m02:/home/docker/cp-test_ha-036200_ha-036200-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-036200 cp ha-036200:/home/docker/cp-test.txt ha-036200-m02:/home/docker/cp-test_ha-036200_ha-036200-m02.txt: (1.0844529s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 ssh -n ha-036200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 ssh -n ha-036200-m02 "sudo cat /home/docker/cp-test_ha-036200_ha-036200-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 cp ha-036200:/home/docker/cp-test.txt ha-036200-m03:/home/docker/cp-test_ha-036200_ha-036200-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-036200 cp ha-036200:/home/docker/cp-test.txt ha-036200-m03:/home/docker/cp-test_ha-036200_ha-036200-m03.txt: (1.0914614s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 ssh -n ha-036200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 ssh -n ha-036200-m03 "sudo cat /home/docker/cp-test_ha-036200_ha-036200-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 cp ha-036200:/home/docker/cp-test.txt ha-036200-m04:/home/docker/cp-test_ha-036200_ha-036200-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-036200 cp ha-036200:/home/docker/cp-test.txt ha-036200-m04:/home/docker/cp-test_ha-036200_ha-036200-m04.txt: (1.0878062s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 ssh -n ha-036200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 ssh -n ha-036200-m04 "sudo cat /home/docker/cp-test_ha-036200_ha-036200-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 cp testdata\cp-test.txt ha-036200-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 ssh -n ha-036200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 cp ha-036200-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2271729321\001\cp-test_ha-036200-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 ssh -n ha-036200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 cp ha-036200-m02:/home/docker/cp-test.txt ha-036200:/home/docker/cp-test_ha-036200-m02_ha-036200.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-036200 cp ha-036200-m02:/home/docker/cp-test.txt ha-036200:/home/docker/cp-test_ha-036200-m02_ha-036200.txt: (1.0357084s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 ssh -n ha-036200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 ssh -n ha-036200 "sudo cat /home/docker/cp-test_ha-036200-m02_ha-036200.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 cp ha-036200-m02:/home/docker/cp-test.txt ha-036200-m03:/home/docker/cp-test_ha-036200-m02_ha-036200-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-036200 cp ha-036200-m02:/home/docker/cp-test.txt ha-036200-m03:/home/docker/cp-test_ha-036200-m02_ha-036200-m03.txt: (1.0813762s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 ssh -n ha-036200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 ssh -n ha-036200-m03 "sudo cat /home/docker/cp-test_ha-036200-m02_ha-036200-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 cp ha-036200-m02:/home/docker/cp-test.txt ha-036200-m04:/home/docker/cp-test_ha-036200-m02_ha-036200-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-036200 cp ha-036200-m02:/home/docker/cp-test.txt ha-036200-m04:/home/docker/cp-test_ha-036200-m02_ha-036200-m04.txt: (1.0457076s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 ssh -n ha-036200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 ssh -n ha-036200-m04 "sudo cat /home/docker/cp-test_ha-036200-m02_ha-036200-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 cp testdata\cp-test.txt ha-036200-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 ssh -n ha-036200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 cp ha-036200-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2271729321\001\cp-test_ha-036200-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 ssh -n ha-036200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 cp ha-036200-m03:/home/docker/cp-test.txt ha-036200:/home/docker/cp-test_ha-036200-m03_ha-036200.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-036200 cp ha-036200-m03:/home/docker/cp-test.txt ha-036200:/home/docker/cp-test_ha-036200-m03_ha-036200.txt: (1.1068427s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 ssh -n ha-036200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 ssh -n ha-036200 "sudo cat /home/docker/cp-test_ha-036200-m03_ha-036200.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 cp ha-036200-m03:/home/docker/cp-test.txt ha-036200-m02:/home/docker/cp-test_ha-036200-m03_ha-036200-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-036200 cp ha-036200-m03:/home/docker/cp-test.txt ha-036200-m02:/home/docker/cp-test_ha-036200-m03_ha-036200-m02.txt: (1.074122s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 ssh -n ha-036200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 ssh -n ha-036200-m02 "sudo cat /home/docker/cp-test_ha-036200-m03_ha-036200-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 cp ha-036200-m03:/home/docker/cp-test.txt ha-036200-m04:/home/docker/cp-test_ha-036200-m03_ha-036200-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-036200 cp ha-036200-m03:/home/docker/cp-test.txt ha-036200-m04:/home/docker/cp-test_ha-036200-m03_ha-036200-m04.txt: (1.0887039s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 ssh -n ha-036200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 ssh -n ha-036200-m04 "sudo cat /home/docker/cp-test_ha-036200-m03_ha-036200-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 cp testdata\cp-test.txt ha-036200-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 ssh -n ha-036200-m04 "sudo cat /home/docker/cp-test.txt"
E0923 10:57:26.196136    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 cp ha-036200-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2271729321\001\cp-test_ha-036200-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 ssh -n ha-036200-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 cp ha-036200-m04:/home/docker/cp-test.txt ha-036200:/home/docker/cp-test_ha-036200-m04_ha-036200.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-036200 cp ha-036200-m04:/home/docker/cp-test.txt ha-036200:/home/docker/cp-test_ha-036200-m04_ha-036200.txt: (1.0950848s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 ssh -n ha-036200-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 ssh -n ha-036200 "sudo cat /home/docker/cp-test_ha-036200-m04_ha-036200.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 cp ha-036200-m04:/home/docker/cp-test.txt ha-036200-m02:/home/docker/cp-test_ha-036200-m04_ha-036200-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-036200 cp ha-036200-m04:/home/docker/cp-test.txt ha-036200-m02:/home/docker/cp-test_ha-036200-m04_ha-036200-m02.txt: (1.0964367s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 ssh -n ha-036200-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 ssh -n ha-036200-m02 "sudo cat /home/docker/cp-test_ha-036200-m04_ha-036200-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 cp ha-036200-m04:/home/docker/cp-test.txt ha-036200-m03:/home/docker/cp-test_ha-036200-m04_ha-036200-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-036200 cp ha-036200-m04:/home/docker/cp-test.txt ha-036200-m03:/home/docker/cp-test_ha-036200-m04_ha-036200-m03.txt: (1.0581518s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 ssh -n ha-036200-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 ssh -n ha-036200-m03 "sudo cat /home/docker/cp-test_ha-036200-m04_ha-036200-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (45.62s)

TestMultiControlPlane/serial/StopSecondaryNode (13.9s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-windows-amd64.exe -p ha-036200 node stop m02 -v=7 --alsologtostderr: (11.8685822s)
ha_test.go:369: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-036200 status -v=7 --alsologtostderr: exit status 7 (2.0275368s)
-- stdout --
	ha-036200
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-036200-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-036200-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-036200-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0923 10:57:47.409293    7644 out.go:345] Setting OutFile to fd 1824 ...
	I0923 10:57:47.488374    7644 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:57:47.488374    7644 out.go:358] Setting ErrFile to fd 2044...
	I0923 10:57:47.488946    7644 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:57:47.500960    7644 out.go:352] Setting JSON to false
	I0923 10:57:47.500960    7644 mustload.go:65] Loading cluster: ha-036200
	I0923 10:57:47.500960    7644 notify.go:220] Checking for updates...
	I0923 10:57:47.500960    7644 config.go:182] Loaded profile config "ha-036200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 10:57:47.500960    7644 status.go:174] checking status of ha-036200 ...
	I0923 10:57:47.522677    7644 cli_runner.go:164] Run: docker container inspect ha-036200 --format={{.State.Status}}
	I0923 10:57:47.594095    7644 status.go:364] ha-036200 host status = "Running" (err=<nil>)
	I0923 10:57:47.594095    7644 host.go:66] Checking if "ha-036200" exists ...
	I0923 10:57:47.602088    7644 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-036200
	I0923 10:57:47.675069    7644 host.go:66] Checking if "ha-036200" exists ...
	I0923 10:57:47.690077    7644 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 10:57:47.698077    7644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-036200
	I0923 10:57:47.763005    7644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58179 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-036200\id_rsa Username:docker}
	I0923 10:57:47.906808    7644 ssh_runner.go:195] Run: systemctl --version
	I0923 10:57:47.933481    7644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:57:47.972350    7644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-036200
	I0923 10:57:48.040993    7644 kubeconfig.go:125] found "ha-036200" server: "https://127.0.0.1:58178"
	I0923 10:57:48.040993    7644 api_server.go:166] Checking apiserver status ...
	I0923 10:57:48.050933    7644 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:57:48.086895    7644 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2551/cgroup
	I0923 10:57:48.113447    7644 api_server.go:182] apiserver freezer: "7:freezer:/docker/38bc858c1a4e6a9b6fddaa840295d14b4da53dfe958c9d979bf7d1c183f45b7f/kubepods/burstable/pod0d84f0239e50691d4e1ac59e40911ad2/f2f677587231bae3c0617f6e5cfefa7e752430779dc053d6bf802f666c2e7c8d"
	I0923 10:57:48.125629    7644 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/38bc858c1a4e6a9b6fddaa840295d14b4da53dfe958c9d979bf7d1c183f45b7f/kubepods/burstable/pod0d84f0239e50691d4e1ac59e40911ad2/f2f677587231bae3c0617f6e5cfefa7e752430779dc053d6bf802f666c2e7c8d/freezer.state
	I0923 10:57:48.147319    7644 api_server.go:204] freezer state: "THAWED"
	I0923 10:57:48.147319    7644 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58178/healthz ...
	I0923 10:57:48.163978    7644 api_server.go:279] https://127.0.0.1:58178/healthz returned 200:
	ok
	I0923 10:57:48.163978    7644 status.go:456] ha-036200 apiserver status = Running (err=<nil>)
	I0923 10:57:48.163978    7644 status.go:176] ha-036200 status: &{Name:ha-036200 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 10:57:48.163978    7644 status.go:174] checking status of ha-036200-m02 ...
	I0923 10:57:48.186889    7644 cli_runner.go:164] Run: docker container inspect ha-036200-m02 --format={{.State.Status}}
	I0923 10:57:48.254383    7644 status.go:364] ha-036200-m02 host status = "Stopped" (err=<nil>)
	I0923 10:57:48.254383    7644 status.go:377] host is not running, skipping remaining checks
	I0923 10:57:48.254383    7644 status.go:176] ha-036200-m02 status: &{Name:ha-036200-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 10:57:48.254383    7644 status.go:174] checking status of ha-036200-m03 ...
	I0923 10:57:48.272401    7644 cli_runner.go:164] Run: docker container inspect ha-036200-m03 --format={{.State.Status}}
	I0923 10:57:48.345523    7644 status.go:364] ha-036200-m03 host status = "Running" (err=<nil>)
	I0923 10:57:48.345523    7644 host.go:66] Checking if "ha-036200-m03" exists ...
	I0923 10:57:48.356630    7644 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-036200-m03
	I0923 10:57:48.421539    7644 host.go:66] Checking if "ha-036200-m03" exists ...
	I0923 10:57:48.432530    7644 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 10:57:48.440528    7644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-036200-m03
	I0923 10:57:48.511534    7644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58297 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-036200-m03\id_rsa Username:docker}
	I0923 10:57:48.644623    7644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:57:48.678591    7644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-036200
	I0923 10:57:48.755303    7644 kubeconfig.go:125] found "ha-036200" server: "https://127.0.0.1:58178"
	I0923 10:57:48.755339    7644 api_server.go:166] Checking apiserver status ...
	I0923 10:57:48.770360    7644 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:57:48.805574    7644 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2438/cgroup
	I0923 10:57:48.831337    7644 api_server.go:182] apiserver freezer: "7:freezer:/docker/b2e4b50f72f8a2412423ca5d5814a41bdc130f1e3bd6491baaca0812d666eb39/kubepods/burstable/pod980e53cde0f71350f5d8e2cb32942e5f/3f659362b1dd29944744c8f070155d18900d235fb7adc33ba50065ad6a9213a9"
	I0923 10:57:48.842307    7644 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b2e4b50f72f8a2412423ca5d5814a41bdc130f1e3bd6491baaca0812d666eb39/kubepods/burstable/pod980e53cde0f71350f5d8e2cb32942e5f/3f659362b1dd29944744c8f070155d18900d235fb7adc33ba50065ad6a9213a9/freezer.state
	I0923 10:57:48.862857    7644 api_server.go:204] freezer state: "THAWED"
	I0923 10:57:48.862978    7644 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58178/healthz ...
	I0923 10:57:48.876112    7644 api_server.go:279] https://127.0.0.1:58178/healthz returned 200:
	ok
	I0923 10:57:48.876112    7644 status.go:456] ha-036200-m03 apiserver status = Running (err=<nil>)
	I0923 10:57:48.876112    7644 status.go:176] ha-036200-m03 status: &{Name:ha-036200-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 10:57:48.876112    7644 status.go:174] checking status of ha-036200-m04 ...
	I0923 10:57:48.902574    7644 cli_runner.go:164] Run: docker container inspect ha-036200-m04 --format={{.State.Status}}
	I0923 10:57:48.973605    7644 status.go:364] ha-036200-m04 host status = "Running" (err=<nil>)
	I0923 10:57:48.973605    7644 host.go:66] Checking if "ha-036200-m04" exists ...
	I0923 10:57:48.983809    7644 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-036200-m04
	I0923 10:57:49.055061    7644 host.go:66] Checking if "ha-036200-m04" exists ...
	I0923 10:57:49.066088    7644 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 10:57:49.074075    7644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-036200-m04
	I0923 10:57:49.143060    7644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58431 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-036200-m04\id_rsa Username:docker}
	I0923 10:57:49.286053    7644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:57:49.306901    7644 status.go:176] ha-036200-m04 status: &{Name:ha-036200-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.90s)
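In the stderr trace above, minikube verifies the apiserver by resolving its freezer cgroup from `/proc/<pid>/cgroup` and reading `freezer.state` (expecting `THAWED`). A minimal sketch of that path construction, using the cgroup line captured in this log rather than a live `/proc` read (cgroup v1 freezer layout assumed):

```shell
# Cgroup line for the kube-apiserver process, as logged by api_server.go:182.
cgroup_line='7:freezer:/docker/38bc858c1a4e6a9b6fddaa840295d14b4da53dfe958c9d979bf7d1c183f45b7f/kubepods/burstable/pod0d84f0239e50691d4e1ac59e40911ad2/f2f677587231bae3c0617f6e5cfefa7e752430779dc053d6bf802f666c2e7c8d'

# Strip the "7:freezer:" prefix to get the cgroup-relative path.
cg_path=${cgroup_line#*:freezer:}

# The state file minikube cats; its content should be "THAWED" for a running apiserver.
echo "/sys/fs/cgroup/freezer${cg_path}/freezer.state"
```

A frozen (`FROZEN`) state would mean the container exists but its processes are suspended, so the subsequent `/healthz` probe would be pointless; checking the freezer first is a cheap liveness pre-check.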

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (2.14s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.1398117s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (2.14s)

TestMultiControlPlane/serial/RestartSecondaryNode (151.22s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 node start m02 -v=7 --alsologtostderr
E0923 10:58:48.122968    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 10:59:55.440631    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-windows-amd64.exe -p ha-036200 node start m02 -v=7 --alsologtostderr: (2m28.2962434s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-windows-amd64.exe -p ha-036200 status -v=7 --alsologtostderr: (2.7383474s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (151.22s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.8687638s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.87s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (210.61s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-windows-amd64.exe node list -p ha-036200 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-windows-amd64.exe stop -p ha-036200 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-windows-amd64.exe stop -p ha-036200 -v=7 --alsologtostderr: (37.9560128s)
ha_test.go:467: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-036200 --wait=true -v=7 --alsologtostderr
E0923 11:01:04.256526    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:01:18.521060    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:01:31.972708    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-036200 --wait=true -v=7 --alsologtostderr: (2m52.1905743s)
ha_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe node list -p ha-036200
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (210.61s)

TestMultiControlPlane/serial/DeleteSecondaryNode (16.72s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-windows-amd64.exe -p ha-036200 node delete m03 -v=7 --alsologtostderr: (14.1742748s)
ha_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Done: out/minikube-windows-amd64.exe -p ha-036200 status -v=7 --alsologtostderr: (1.9994458s)
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.72s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.18s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.1753904s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.18s)

TestMultiControlPlane/serial/StopCluster (36.8s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-windows-amd64.exe -p ha-036200 stop -v=7 --alsologtostderr: (36.2821351s)
ha_test.go:537: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-036200 status -v=7 --alsologtostderr: exit status 7 (513.3274ms)

-- stdout --
	ha-036200
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-036200-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-036200-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0923 11:04:51.460196   10780 out.go:345] Setting OutFile to fd 1296 ...
	I0923 11:04:51.527796   10780 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:04:51.527796   10780 out.go:358] Setting ErrFile to fd 1844...
	I0923 11:04:51.527796   10780 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:04:51.542013   10780 out.go:352] Setting JSON to false
	I0923 11:04:51.542089   10780 mustload.go:65] Loading cluster: ha-036200
	I0923 11:04:51.542255   10780 notify.go:220] Checking for updates...
	I0923 11:04:51.542830   10780 config.go:182] Loaded profile config "ha-036200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:04:51.542945   10780 status.go:174] checking status of ha-036200 ...
	I0923 11:04:51.562443   10780 cli_runner.go:164] Run: docker container inspect ha-036200 --format={{.State.Status}}
	I0923 11:04:51.645313   10780 status.go:364] ha-036200 host status = "Stopped" (err=<nil>)
	I0923 11:04:51.645313   10780 status.go:377] host is not running, skipping remaining checks
	I0923 11:04:51.645313   10780 status.go:176] ha-036200 status: &{Name:ha-036200 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 11:04:51.645313   10780 status.go:174] checking status of ha-036200-m02 ...
	I0923 11:04:51.667658   10780 cli_runner.go:164] Run: docker container inspect ha-036200-m02 --format={{.State.Status}}
	I0923 11:04:51.748979   10780 status.go:364] ha-036200-m02 host status = "Stopped" (err=<nil>)
	I0923 11:04:51.748979   10780 status.go:377] host is not running, skipping remaining checks
	I0923 11:04:51.748979   10780 status.go:176] ha-036200-m02 status: &{Name:ha-036200-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 11:04:51.748979   10780 status.go:174] checking status of ha-036200-m04 ...
	I0923 11:04:51.767459   10780 cli_runner.go:164] Run: docker container inspect ha-036200-m04 --format={{.State.Status}}
	I0923 11:04:51.846942   10780 status.go:364] ha-036200-m04 host status = "Stopped" (err=<nil>)
	I0923 11:04:51.846942   10780 status.go:377] host is not running, skipping remaining checks
	I0923 11:04:51.846942   10780 status.go:176] ha-036200-m04 status: &{Name:ha-036200-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.80s)

TestMultiControlPlane/serial/RestartCluster (155.08s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-036200 --wait=true -v=7 --alsologtostderr --driver=docker
E0923 11:04:55.454941    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:06:04.271176    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-036200 --wait=true -v=7 --alsologtostderr --driver=docker: (2m32.5468568s)
ha_test.go:566: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 status -v=7 --alsologtostderr
ha_test.go:566: (dbg) Done: out/minikube-windows-amd64.exe -p ha-036200 status -v=7 --alsologtostderr: (2.1171035s)
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (155.08s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (2.15s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.1514981s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (2.15s)

TestMultiControlPlane/serial/AddSecondaryNode (71.73s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-036200 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-036200 --control-plane -v=7 --alsologtostderr: (1m8.7499934s)
ha_test.go:611: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-036200 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-windows-amd64.exe -p ha-036200 status -v=7 --alsologtostderr: (2.9749341s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (71.73s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (3.02s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (3.0201791s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (3.02s)

TestImageBuild/serial/Setup (61.13s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-329800 --driver=docker
E0923 11:09:55.469459    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-329800 --driver=docker: (1m1.1276675s)
--- PASS: TestImageBuild/serial/Setup (61.13s)

TestImageBuild/serial/NormalBuild (5.34s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-329800
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-329800: (5.3351921s)
--- PASS: TestImageBuild/serial/NormalBuild (5.34s)

TestImageBuild/serial/BuildWithBuildArg (2.28s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-329800
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-329800: (2.2773146s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (2.28s)

TestImageBuild/serial/BuildWithDockerIgnore (1.51s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-329800
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-329800: (1.4948363s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.51s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.69s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-329800
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-329800: (1.6938333s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.69s)

TestJSONOutput/start/Command (95.93s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-517400 --output=json --user=testUser --memory=2200 --wait=true --driver=docker
E0923 11:11:04.285120    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-517400 --output=json --user=testUser --memory=2200 --wait=true --driver=docker: (1m35.9282819s)
--- PASS: TestJSONOutput/start/Command (95.93s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (1.37s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-517400 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-517400 --output=json --user=testUser: (1.3688951s)
--- PASS: TestJSONOutput/pause/Command (1.37s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (1.19s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-517400 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-517400 --output=json --user=testUser: (1.1923598s)
--- PASS: TestJSONOutput/unpause/Command (1.19s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.36s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-517400 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-517400 --output=json --user=testUser: (12.3584474s)
--- PASS: TestJSONOutput/stop/Command (12.36s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.88s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-823500 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-823500 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (258.5853ms)

-- stdout --
	{"specversion":"1.0","id":"b10b27b5-4520-46f6-87f4-c373cdb239de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-823500] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"730ab5de-18b6-4fac-8815-3150b1c70c9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube4\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"4280bea1-26c9-48c5-b3d3-7fe8f1df30d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"46532997-1574-433f-bba7-f031df273af2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"0ce928fc-4ff4-48ee-9f28-acf5cb0a5040","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19689"}}
	{"specversion":"1.0","id":"6ce1f68c-3527-446e-9fd0-b20d65e51565","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"65911ffb-5d3a-49bb-8765-e49bbe27ecf2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-823500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-823500
--- PASS: TestErrorJSONOutput (0.88s)

TestKicCustomNetwork/create_custom_network (70.17s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-404300 --network=
E0923 11:12:27.365857    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-404300 --network=: (1m6.137964s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-404300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-404300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-404300: (3.944408s)
--- PASS: TestKicCustomNetwork/create_custom_network (70.17s)

TestKicCustomNetwork/use_default_bridge_network (68.69s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-120500 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-120500 --network=bridge: (1m5.5486038s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-120500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-120500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-120500: (3.0572672s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (68.69s)

TestKicExistingNetwork (70.44s)

=== RUN   TestKicExistingNetwork
I0923 11:14:29.720122    4316 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0923 11:14:29.793802    4316 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0923 11:14:29.803708    4316 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0923 11:14:29.803708    4316 cli_runner.go:164] Run: docker network inspect existing-network
W0923 11:14:29.880215    4316 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0923 11:14:29.880215    4316 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0923 11:14:29.880215    4316 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0923 11:14:29.894104    4316 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0923 11:14:29.991382    4316 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0006a5c50}
I0923 11:14:29.991382    4316 network_create.go:124] attempt to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0923 11:14:30.004899    4316 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
W0923 11:14:30.083850    4316 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network returned with exit code 1
W0923 11:14:30.083850    4316 network_create.go:149] failed to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network: exit status 1
stdout:

stderr:
Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
W0923 11:14:30.083850    4316 network_create.go:116] failed to create docker network existing-network 192.168.49.0/24, will retry: subnet is taken
I0923 11:14:30.112553    4316 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I0923 11:14:30.130743    4316 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0006cc750}
I0923 11:14:30.131453    4316 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0923 11:14:30.141394    4316 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0923 11:14:30.321998    4316 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-998700 --network=existing-network
E0923 11:14:55.482811    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-998700 --network=existing-network: (1m6.2401476s)
helpers_test.go:175: Cleaning up "existing-network-998700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-998700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-998700: (3.4165992s)
I0923 11:15:40.081076    4316 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (70.44s)

TestKicCustomSubnet (69.41s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-184500 --subnet=192.168.60.0/24
E0923 11:16:04.300815    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-184500 --subnet=192.168.60.0/24: (1m5.9044901s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-184500 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-184500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-184500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-184500: (3.4232371s)
--- PASS: TestKicCustomSubnet (69.41s)

TestKicStaticIP (71.79s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe start -p static-ip-567700 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe start -p static-ip-567700 --static-ip=192.168.200.200: (1m7.1851454s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe -p static-ip-567700 ip
helpers_test.go:175: Cleaning up "static-ip-567700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p static-ip-567700
E0923 11:17:58.570637    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p static-ip-567700: (4.1431515s)
--- PASS: TestKicStaticIP (71.79s)

TestMainNoArgs (0.24s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.24s)

TestMinikubeProfile (135.75s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-734900 --driver=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-734900 --driver=docker: (1m1.7523557s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-734900 --driver=docker
E0923 11:19:55.497118    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-734900 --driver=docker: (1m1.7550013s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-734900
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (1.6670182s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-734900
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (1.9529095s)
helpers_test.go:175: Cleaning up "second-734900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-734900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-734900: (4.0846603s)
helpers_test.go:175: Cleaning up "first-734900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-734900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-734900: (3.8760996s)
--- PASS: TestMinikubeProfile (135.75s)

TestMountStart/serial/StartWithMountFirst (17.96s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-449800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-449800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: (16.9557055s)
--- PASS: TestMountStart/serial/StartWithMountFirst (17.96s)

TestMountStart/serial/VerifyMountFirst (0.73s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-449800 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.73s)

TestMountStart/serial/StartWithMountSecond (16.79s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-449800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-449800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker: (15.7849694s)
--- PASS: TestMountStart/serial/StartWithMountSecond (16.79s)

TestMountStart/serial/VerifyMountSecond (0.72s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-449800 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.72s)

TestMountStart/serial/DeleteFirst (2.80s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-449800 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-449800 --alsologtostderr -v=5: (2.8036323s)
--- PASS: TestMountStart/serial/DeleteFirst (2.80s)

TestMountStart/serial/VerifyMountPostDelete (0.74s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-449800 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.74s)

TestMountStart/serial/Stop (1.97s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-449800
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-449800: (1.9740701s)
--- PASS: TestMountStart/serial/Stop (1.97s)

TestMountStart/serial/RestartStopped (12.05s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-449800
E0923 11:21:04.313216    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-449800: (11.0534571s)
--- PASS: TestMountStart/serial/RestartStopped (12.05s)

TestMountStart/serial/VerifyMountPostStop (0.71s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-449800 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.71s)

TestMultiNode/serial/FreshStart2Nodes (144.51s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-390800 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-390800 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker: (2m22.7025895s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-390800 status --alsologtostderr: (1.8030578s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (144.51s)

TestMultiNode/serial/DeployApp2Nodes (42.84s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-390800 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-390800 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-390800 -- rollout status deployment/busybox: (35.8902318s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-390800 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-390800 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-390800 -- exec busybox-7dff88458-jwg9q -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-390800 -- exec busybox-7dff88458-jwg9q -- nslookup kubernetes.io: (1.7888376s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-390800 -- exec busybox-7dff88458-rdg4h -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-390800 -- exec busybox-7dff88458-rdg4h -- nslookup kubernetes.io: (1.5355074s)
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-390800 -- exec busybox-7dff88458-jwg9q -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-390800 -- exec busybox-7dff88458-rdg4h -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-390800 -- exec busybox-7dff88458-jwg9q -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-390800 -- exec busybox-7dff88458-rdg4h -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (42.84s)

TestMultiNode/serial/PingHostFrom2Pods (2.44s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-390800 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-390800 -- exec busybox-7dff88458-jwg9q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-390800 -- exec busybox-7dff88458-jwg9q -- sh -c "ping -c 1 192.168.65.254"
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-390800 -- exec busybox-7dff88458-rdg4h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-390800 -- exec busybox-7dff88458-rdg4h -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (2.44s)

TestMultiNode/serial/AddNode (48.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-390800 -v 3 --alsologtostderr
E0923 11:24:55.511391    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-390800 -v 3 --alsologtostderr: (46.0119212s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-390800 status --alsologtostderr: (2.0582264s)
--- PASS: TestMultiNode/serial/AddNode (48.07s)

TestMultiNode/serial/MultiNodeLabels (0.18s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-390800 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.18s)

TestMultiNode/serial/ProfileList (1.91s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.9062605s)
--- PASS: TestMultiNode/serial/ProfileList (1.91s)

TestMultiNode/serial/CopyFile (25.87s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-390800 status --output json --alsologtostderr: (1.8360694s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 cp testdata\cp-test.txt multinode-390800:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 ssh -n multinode-390800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 cp multinode-390800:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile2827699872\001\cp-test_multinode-390800.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 ssh -n multinode-390800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 cp multinode-390800:/home/docker/cp-test.txt multinode-390800-m02:/home/docker/cp-test_multinode-390800_multinode-390800-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-390800 cp multinode-390800:/home/docker/cp-test.txt multinode-390800-m02:/home/docker/cp-test_multinode-390800_multinode-390800-m02.txt: (1.0703803s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 ssh -n multinode-390800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 ssh -n multinode-390800-m02 "sudo cat /home/docker/cp-test_multinode-390800_multinode-390800-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 cp multinode-390800:/home/docker/cp-test.txt multinode-390800-m03:/home/docker/cp-test_multinode-390800_multinode-390800-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-390800 cp multinode-390800:/home/docker/cp-test.txt multinode-390800-m03:/home/docker/cp-test_multinode-390800_multinode-390800-m03.txt: (1.0504887s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 ssh -n multinode-390800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 ssh -n multinode-390800-m03 "sudo cat /home/docker/cp-test_multinode-390800_multinode-390800-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 cp testdata\cp-test.txt multinode-390800-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 ssh -n multinode-390800-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 cp multinode-390800-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile2827699872\001\cp-test_multinode-390800-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 ssh -n multinode-390800-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 cp multinode-390800-m02:/home/docker/cp-test.txt multinode-390800:/home/docker/cp-test_multinode-390800-m02_multinode-390800.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-390800 cp multinode-390800-m02:/home/docker/cp-test.txt multinode-390800:/home/docker/cp-test_multinode-390800-m02_multinode-390800.txt: (1.0664296s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 ssh -n multinode-390800-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 ssh -n multinode-390800 "sudo cat /home/docker/cp-test_multinode-390800-m02_multinode-390800.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 cp multinode-390800-m02:/home/docker/cp-test.txt multinode-390800-m03:/home/docker/cp-test_multinode-390800-m02_multinode-390800-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-390800 cp multinode-390800-m02:/home/docker/cp-test.txt multinode-390800-m03:/home/docker/cp-test_multinode-390800-m02_multinode-390800-m03.txt: (1.0445227s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 ssh -n multinode-390800-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 ssh -n multinode-390800-m03 "sudo cat /home/docker/cp-test_multinode-390800-m02_multinode-390800-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 cp testdata\cp-test.txt multinode-390800-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 ssh -n multinode-390800-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 cp multinode-390800-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile2827699872\001\cp-test_multinode-390800-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 ssh -n multinode-390800-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 cp multinode-390800-m03:/home/docker/cp-test.txt multinode-390800:/home/docker/cp-test_multinode-390800-m03_multinode-390800.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-390800 cp multinode-390800-m03:/home/docker/cp-test.txt multinode-390800:/home/docker/cp-test_multinode-390800-m03_multinode-390800.txt: (1.0341398s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 ssh -n multinode-390800-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 ssh -n multinode-390800 "sudo cat /home/docker/cp-test_multinode-390800-m03_multinode-390800.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 cp multinode-390800-m03:/home/docker/cp-test.txt multinode-390800-m02:/home/docker/cp-test_multinode-390800-m03_multinode-390800-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-390800 cp multinode-390800-m03:/home/docker/cp-test.txt multinode-390800-m02:/home/docker/cp-test_multinode-390800-m03_multinode-390800-m02.txt: (1.0758598s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 ssh -n multinode-390800-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 ssh -n multinode-390800-m02 "sudo cat /home/docker/cp-test_multinode-390800-m03_multinode-390800-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (25.87s)

TestMultiNode/serial/StopNode (4.75s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-390800 node stop m03: (1.8934058s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-390800 status: exit status 7 (1.4267952s)

-- stdout --
	multinode-390800
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-390800-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-390800-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-390800 status --alsologtostderr: exit status 7 (1.4247597s)

-- stdout --
	multinode-390800
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-390800-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-390800-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0923 11:25:44.607316   12152 out.go:345] Setting OutFile to fd 1948 ...
	I0923 11:25:44.681444   12152 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:25:44.681444   12152 out.go:358] Setting ErrFile to fd 1400...
	I0923 11:25:44.681444   12152 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:25:44.703281   12152 out.go:352] Setting JSON to false
	I0923 11:25:44.703390   12152 mustload.go:65] Loading cluster: multinode-390800
	I0923 11:25:44.703390   12152 notify.go:220] Checking for updates...
	I0923 11:25:44.704030   12152 config.go:182] Loaded profile config "multinode-390800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:25:44.704030   12152 status.go:174] checking status of multinode-390800 ...
	I0923 11:25:44.722703   12152 cli_runner.go:164] Run: docker container inspect multinode-390800 --format={{.State.Status}}
	I0923 11:25:44.809601   12152 status.go:364] multinode-390800 host status = "Running" (err=<nil>)
	I0923 11:25:44.809775   12152 host.go:66] Checking if "multinode-390800" exists ...
	I0923 11:25:44.824710   12152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-390800
	I0923 11:25:44.899905   12152 host.go:66] Checking if "multinode-390800" exists ...
	I0923 11:25:44.911493   12152 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 11:25:44.919739   12152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-390800
	I0923 11:25:44.989905   12152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59721 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\multinode-390800\id_rsa Username:docker}
	I0923 11:25:45.126523   12152 ssh_runner.go:195] Run: systemctl --version
	I0923 11:25:45.149786   12152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 11:25:45.187853   12152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-390800
	I0923 11:25:45.266252   12152 kubeconfig.go:125] found "multinode-390800" server: "https://127.0.0.1:59720"
	I0923 11:25:45.266252   12152 api_server.go:166] Checking apiserver status ...
	I0923 11:25:45.276252   12152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:25:45.310809   12152 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2491/cgroup
	I0923 11:25:45.333888   12152 api_server.go:182] apiserver freezer: "7:freezer:/docker/85a9d1fd077975e3697933f6b7bfc47884b03aee1fd75387e504f100dd72e318/kubepods/burstable/podada48adcb53b93b417f458d22163b13c/7d6cbc286257ac31ef112cc7b9c9feb5d94336493d45585723de5711a617b22e"
	I0923 11:25:45.343888   12152 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/85a9d1fd077975e3697933f6b7bfc47884b03aee1fd75387e504f100dd72e318/kubepods/burstable/podada48adcb53b93b417f458d22163b13c/7d6cbc286257ac31ef112cc7b9c9feb5d94336493d45585723de5711a617b22e/freezer.state
	I0923 11:25:45.366892   12152 api_server.go:204] freezer state: "THAWED"
	I0923 11:25:45.366892   12152 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:59720/healthz ...
	I0923 11:25:45.379653   12152 api_server.go:279] https://127.0.0.1:59720/healthz returned 200:
	ok
	I0923 11:25:45.379653   12152 status.go:456] multinode-390800 apiserver status = Running (err=<nil>)
	I0923 11:25:45.379653   12152 status.go:176] multinode-390800 status: &{Name:multinode-390800 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 11:25:45.379653   12152 status.go:174] checking status of multinode-390800-m02 ...
	I0923 11:25:45.398071   12152 cli_runner.go:164] Run: docker container inspect multinode-390800-m02 --format={{.State.Status}}
	I0923 11:25:45.470526   12152 status.go:364] multinode-390800-m02 host status = "Running" (err=<nil>)
	I0923 11:25:45.470526   12152 host.go:66] Checking if "multinode-390800-m02" exists ...
	I0923 11:25:45.481676   12152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-390800-m02
	I0923 11:25:45.546690   12152 host.go:66] Checking if "multinode-390800-m02" exists ...
	I0923 11:25:45.557713   12152 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 11:25:45.565714   12152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-390800-m02
	I0923 11:25:45.639377   12152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59773 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\multinode-390800-m02\id_rsa Username:docker}
	I0923 11:25:45.778304   12152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 11:25:45.802382   12152 status.go:176] multinode-390800-m02 status: &{Name:multinode-390800-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0923 11:25:45.802382   12152 status.go:174] checking status of multinode-390800-m03 ...
	I0923 11:25:45.818807   12152 cli_runner.go:164] Run: docker container inspect multinode-390800-m03 --format={{.State.Status}}
	I0923 11:25:45.890604   12152 status.go:364] multinode-390800-m03 host status = "Stopped" (err=<nil>)
	I0923 11:25:45.890604   12152 status.go:377] host is not running, skipping remaining checks
	I0923 11:25:45.891605   12152 status.go:176] multinode-390800-m03 status: &{Name:multinode-390800-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (4.75s)
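A minimal sketch of the property StopNode asserts: `minikube status` signals degraded state through its exit code (exit status 7 above, with one host stopped). Counting `host: Stopped` lines in a captured status dump reproduces the check offline; the sample text below stands in for real `minikube status` output.

```shell
# Sample status dump (stand-in for `minikube -p <profile> status`),
# trimmed to the fields the exit-code check cares about.
status_output='multinode-390800
type: Control Plane
host: Running

multinode-390800-m03
type: Worker
host: Stopped'
# Count stopped hosts; a wrapper script could treat a non-zero count the
# way minikube treats it: report exit status 7 instead of 0.
stopped=$(printf '%s\n' "$status_output" | grep -c '^host: Stopped')
echo "stopped hosts: $stopped"
```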
TestMultiNode/serial/StartAfterStop (17.81s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-390800 node start m03 -v=7 --alsologtostderr: (15.8352439s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-390800 status -v=7 --alsologtostderr: (1.8023849s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (17.81s)
TestMultiNode/serial/RestartKeepsNodes (116.82s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-390800
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-390800
E0923 11:26:04.326866    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-390800: (25.0387392s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-390800 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-390800 --wait=true -v=8 --alsologtostderr: (1m31.3513024s)
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-390800
--- PASS: TestMultiNode/serial/RestartKeepsNodes (116.82s)
TestMultiNode/serial/DeleteNode (9.68s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-390800 node delete m03: (7.8384373s)
multinode_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 status --alsologtostderr
multinode_test.go:422: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-390800 status --alsologtostderr: (1.3429503s)
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (9.68s)
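The go-template at the end of DeleteNode prints each remaining node's Ready condition. Since that template needs a live apiserver, this sketch performs the same "is everything Ready?" check against a sample `kubectl get nodes` listing (sample data, not output from this run).

```shell
# Sample tabular listing standing in for `kubectl get nodes`.
nodes='NAME                   STATUS   ROLES           AGE   VERSION
multinode-390800       Ready    control-plane   10m   v1.31.1
multinode-390800-m02   Ready    <none>          8m    v1.31.1'
# Skip the header row, keep any node whose STATUS column is not "Ready",
# and count the survivors; zero means the cluster is fully Ready.
not_ready=$(printf '%s\n' "$nodes" | awk 'NR>1 && $2 != "Ready"' | wc -l)
echo "nodes not Ready: $not_ready"
```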
TestMultiNode/serial/StopMultiNode (24.26s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 stop
multinode_test.go:345: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-390800 stop: (23.4566481s)
multinode_test.go:351: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-390800 status: exit status 7 (403.9822ms)
-- stdout --
	multinode-390800
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-390800-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-390800 status --alsologtostderr: exit status 7 (396.5079ms)
-- stdout --
	multinode-390800
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-390800-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0923 11:28:34.205557   14036 out.go:345] Setting OutFile to fd 1120 ...
	I0923 11:28:34.277155   14036 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:28:34.277155   14036 out.go:358] Setting ErrFile to fd 2008...
	I0923 11:28:34.277155   14036 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:28:34.291819   14036 out.go:352] Setting JSON to false
	I0923 11:28:34.292819   14036 mustload.go:65] Loading cluster: multinode-390800
	I0923 11:28:34.292819   14036 notify.go:220] Checking for updates...
	I0923 11:28:34.292819   14036 config.go:182] Loaded profile config "multinode-390800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:28:34.292819   14036 status.go:174] checking status of multinode-390800 ...
	I0923 11:28:34.315252   14036 cli_runner.go:164] Run: docker container inspect multinode-390800 --format={{.State.Status}}
	I0923 11:28:34.384216   14036 status.go:364] multinode-390800 host status = "Stopped" (err=<nil>)
	I0923 11:28:34.384216   14036 status.go:377] host is not running, skipping remaining checks
	I0923 11:28:34.384762   14036 status.go:176] multinode-390800 status: &{Name:multinode-390800 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 11:28:34.384762   14036 status.go:174] checking status of multinode-390800-m02 ...
	I0923 11:28:34.403526   14036 cli_runner.go:164] Run: docker container inspect multinode-390800-m02 --format={{.State.Status}}
	I0923 11:28:34.472891   14036 status.go:364] multinode-390800-m02 host status = "Stopped" (err=<nil>)
	I0923 11:28:34.472891   14036 status.go:377] host is not running, skipping remaining checks
	I0923 11:28:34.472891   14036 status.go:176] multinode-390800-m02 status: &{Name:multinode-390800-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.26s)
TestMultiNode/serial/RestartMultiNode (60.02s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-390800 --wait=true -v=8 --alsologtostderr --driver=docker
E0923 11:29:07.415659    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-390800 --wait=true -v=8 --alsologtostderr --driver=docker: (58.1982236s)
multinode_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-390800 status --alsologtostderr
multinode_test.go:382: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-390800 status --alsologtostderr: (1.4018052s)
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (60.02s)
TestMultiNode/serial/ValidateNameConflict (65.06s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-390800
multinode_test.go:464: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-390800-m02 --driver=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-390800-m02 --driver=docker: exit status 14 (264.5451ms)
-- stdout --
	* [multinode-390800-m02] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-390800-m02' is duplicated with machine name 'multinode-390800-m02' in profile 'multinode-390800'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-390800-m03 --driver=docker
E0923 11:29:55.525675    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-390800-m03 --driver=docker: (59.7472083s)
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-390800
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-390800: exit status 80 (778.0395ms)
-- stdout --
	* Adding node m03 to cluster multinode-390800 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-390800-m03 already exists in multinode-390800-m03 profile
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_node_6ccce2fc44e3bb58d6c4f91e09ae7c7eaaf65535_25.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-390800-m03
multinode_test.go:484: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-390800-m03: (4.043591s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (65.06s)
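ValidateNameConflict hinges on profile-name uniqueness: minikube exits with status 14 (`MK_USAGE`) when a requested profile name collides with an existing machine name, as the stderr above shows. A sketch of that uniqueness check, with the names from the run above hard-coded as sample data:

```shell
# Existing machine names (from the log above) and a colliding candidate.
existing="multinode-390800 multinode-390800-m02"
candidate="multinode-390800-m02"
# Linear scan for a collision; a wrapper would refuse to create the
# profile (minikube's MK_USAGE, exit 14) when result=duplicate.
result=unique
for name in $existing; do
  if [ "$name" = "$candidate" ]; then result=duplicate; fi
done
echo "$result"
```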
TestPreload (152.05s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-977300 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4
E0923 11:31:04.341677    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-977300 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.24.4: (1m42.7194559s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-977300 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-977300 image pull gcr.io/k8s-minikube/busybox: (2.1998848s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-977300
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-977300: (12.1293196s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-977300 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-977300 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker: (29.6167666s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-977300 image list
helpers_test.go:175: Cleaning up "test-preload-977300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-977300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-977300: (4.3874198s)
--- PASS: TestPreload (152.05s)
TestScheduledStopWindows (131.26s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-380300 --memory=2048 --driver=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-380300 --memory=2048 --driver=docker: (1m2.7704851s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-380300 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-380300 --schedule 5m: (1.4139577s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-380300 -n scheduled-stop-380300
scheduled_stop_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-380300 -n scheduled-stop-380300: (1.0126962s)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-380300 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-380300 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-380300 --schedule 5s: (1.5899666s)
E0923 11:34:38.620276    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:34:55.539793    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-380300
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-380300: exit status 7 (347.4359ms)
-- stdout --
	scheduled-stop-380300
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-380300 -n scheduled-stop-380300
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-380300 -n scheduled-stop-380300: exit status 7 (320.8572ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-380300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-380300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-380300: (2.9266006s)
--- PASS: TestScheduledStopWindows (131.26s)
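`stop --schedule` arms a countdown that `status --format={{.TimeToStop}}` reports until it fires; re-running with a new duration (5m, then 5s above) replaces the old timer, and once the host stops, `status` exits 7. As a stand-in for minikube's Go duration parsing, this sketch converts the schedule flags used in this test to seconds:

```shell
# Convert a minikube-style schedule flag ("5m", "5s") to seconds.
# Only the minute/second suffixes used in this test are handled.
to_seconds() {
  case "$1" in
    *m) echo $(( ${1%m} * 60 )) ;;
    *s) echo "${1%s}" ;;
  esac
}
echo "$(to_seconds 5m) $(to_seconds 5s)"
```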
TestInsufficientStorage (41.45s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-268800 --memory=2048 --output=json --wait=true --driver=docker
E0923 11:36:04.356109    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-268800 --memory=2048 --output=json --wait=true --driver=docker: exit status 26 (36.7154172s)
-- stdout --
	{"specversion":"1.0","id":"d6cb88c0-9745-48bd-9ff6-f67bb0d01c02","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-268800] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"dbfa7a72-316b-41f1-8620-922816a39f28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube4\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"a7936f00-d323-4e33-a845-32c5e456a87e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a60a26c0-cf6e-4cc6-91e3-b4ec8c7b760e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"8b358e56-0a73-46fd-a777-ced047734c9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19689"}}
	{"specversion":"1.0","id":"00e39c76-f905-4139-b721-83f024e11438","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c9c7df24-dd31-4542-b194-752a5c1b0b3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"79ff8350-07b6-4c9b-9889-d71e6b80a09a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"73886061-ce37-4b11-a2e9-5f876240734d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"dd830116-e2ec-4817-a333-1692e61f59b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"9069ca28-3721-4133-bb0d-c5ad9dca3bee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-268800\" primary control-plane node in \"insufficient-storage-268800\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"edf29f0f-8b44-4cf1-ab8c-f57e81bcedf0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726784731-19672 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"8deef74d-b463-4099-a20a-85a727cbb6fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"cf51071a-d98a-4f2f-95e2-4f68d2780554","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-268800 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-268800 --output=json --layout=cluster: exit status 7 (830.1881ms)
-- stdout --
	{"Name":"insufficient-storage-268800","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-268800","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0923 11:36:08.827265    9360 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-268800" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-268800 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-268800 --output=json --layout=cluster: exit status 7 (799.7187ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-268800","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-268800","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0923 11:36:09.632882   13340 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-268800" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	E0923 11:36:09.667007   13340 status.go:258] unable to read event log: stat: CreateFile C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\insufficient-storage-268800\events.json: The system cannot find the file specified.

                                                
                                                
** /stderr **
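The `--layout=cluster` JSON above follows a fixed shape (HTTP-style status codes per cluster, node, and component), which makes it straightforward to consume from a script. A minimal sketch, using an abridged copy of the output above (field names and codes are verbatim from the log; the script itself is illustrative and not part of the test suite):

```python
import json

# Abridged (same field names and status codes) from the --layout=cluster output above.
status = json.loads("""
{
  "Name": "insufficient-storage-268800",
  "StatusCode": 507,
  "StatusName": "InsufficientStorage",
  "Nodes": [
    {
      "Name": "insufficient-storage-268800",
      "StatusCode": 507,
      "StatusName": "InsufficientStorage",
      "Components": {
        "apiserver": {"Name": "apiserver", "StatusCode": 405, "StatusName": "Stopped"},
        "kubelet":   {"Name": "kubelet",   "StatusCode": 405, "StatusName": "Stopped"}
      }
    }
  ]
}
""")

# A 5xx code at the cluster level means unhealthy (507 = InsufficientStorage).
unhealthy = status["StatusCode"] >= 500
stopped = sorted(c["Name"]
                 for node in status["Nodes"]
                 for c in node["Components"].values()
                 if c["StatusName"] == "Stopped")
print(unhealthy, stopped)  # True ['apiserver', 'kubelet']
```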
helpers_test.go:175: Cleaning up "insufficient-storage-268800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-268800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-268800: (3.1080679s)
--- PASS: TestInsufficientStorage (41.45s)

TestRunningBinaryUpgrade (208.4s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.26.0.626336435.exe start -p running-upgrade-910800 --memory=2200 --vm-driver=docker
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.26.0.626336435.exe start -p running-upgrade-910800 --memory=2200 --vm-driver=docker: (1m42.1746204s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-910800 --memory=2200 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-910800 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m41.09286s)
helpers_test.go:175: Cleaning up "running-upgrade-910800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-910800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-910800: (3.8128886s)
--- PASS: TestRunningBinaryUpgrade (208.40s)

TestKubernetesUpgrade (297.02s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-445800 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker
E0923 11:41:04.370178    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-445800 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker: (1m44.4161403s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-445800
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-445800: (21.6889165s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-445800 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-445800 status --format={{.Host}}: exit status 7 (358.5629ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-445800 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-445800 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker: (2m13.9672076s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-445800 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-445800 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-445800 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker: exit status 106 (323.8907ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-445800] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-445800
	    minikube start -p kubernetes-upgrade-445800 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4458002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-445800 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
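The downgrade guard above amounts to a version comparison: the requested v1.20.0 sorts below the deployed v1.31.1, so minikube exits with K8S_DOWNGRADE_UNSUPPORTED rather than attempt an unsafe in-place downgrade. A rough illustration of that check (hypothetical helper names, not minikube's actual implementation):

```python
def parse_version(v: str) -> tuple:
    # "v1.31.1" -> (1, 31, 1); tuples compare component-wise.
    return tuple(int(part) for part in v.lstrip("v").split("."))

def is_downgrade(current: str, requested: str) -> bool:
    return parse_version(requested) < parse_version(current)

print(is_downgrade("v1.31.1", "v1.20.0"))  # True  -> refuse: K8S_DOWNGRADE_UNSUPPORTED
print(is_downgrade("v1.20.0", "v1.31.1"))  # False -> upgrade is allowed
```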
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-445800 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-445800 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker: (31.1256476s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-445800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-445800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-445800: (4.9408411s)
--- PASS: TestKubernetesUpgrade (297.02s)

TestMissingContainerUpgrade (242.65s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.26.0.498175588.exe start -p missing-upgrade-016000 --memory=2200 --driver=docker
E0923 11:39:55.553927    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.26.0.498175588.exe start -p missing-upgrade-016000 --memory=2200 --driver=docker: (1m23.7966656s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-016000
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-016000: (14.8667044s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-016000
version_upgrade_test.go:329: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-016000 --memory=2200 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-016000 --memory=2200 --alsologtostderr -v=1 --driver=docker: (2m17.8778228s)
helpers_test.go:175: Cleaning up "missing-upgrade-016000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-016000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-016000: (5.1985697s)
--- PASS: TestMissingContainerUpgrade (242.65s)

TestStoppedBinaryUpgrade/Setup (1.05s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.05s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.29s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-799800 --no-kubernetes --kubernetes-version=1.20 --driver=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-799800 --no-kubernetes --kubernetes-version=1.20 --driver=docker: exit status 14 (294.6563ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-799800] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4894 Build 19045.4894
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.29s)

TestNoKubernetes/serial/StartWithK8s (92.48s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-799800 --driver=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-799800 --driver=docker: (1m30.9577182s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-799800 status -o json
no_kubernetes_test.go:200: (dbg) Done: out/minikube-windows-amd64.exe -p NoKubernetes-799800 status -o json: (1.5218293s)
--- PASS: TestNoKubernetes/serial/StartWithK8s (92.48s)

TestStoppedBinaryUpgrade/Upgrade (308.79s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.26.0.3487549050.exe start -p stopped-upgrade-337100 --memory=2200 --vm-driver=docker
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.26.0.3487549050.exe start -p stopped-upgrade-337100 --memory=2200 --vm-driver=docker: (3m45.1862814s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.26.0.3487549050.exe -p stopped-upgrade-337100 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.26.0.3487549050.exe -p stopped-upgrade-337100 stop: (13.3850161s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-337100 --memory=2200 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-337100 --memory=2200 --alsologtostderr -v=1 --driver=docker: (1m10.2213051s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (308.79s)

TestNoKubernetes/serial/StartWithStopK8s (25.82s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-799800 --no-kubernetes --driver=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-799800 --no-kubernetes --driver=docker: (19.8980718s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-799800 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p NoKubernetes-799800 status -o json: exit status 2 (1.2146086s)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-799800","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
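Note the split in the status JSON above: the Docker host is Running while Kubelet and APIServer are Stopped, which is why `status` exits 2 for a `--no-kubernetes` profile. Parsing the verbatim stdout makes that distinction explicit (illustrative only, not part of the test suite):

```python
import json

# Verbatim from the `minikube status -o json` stdout above.
raw = ('{"Name":"NoKubernetes-799800","Host":"Running","Kubelet":"Stopped",'
       '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}')
st = json.loads(raw)

# Host container runs, but no Kubernetes components were ever started.
no_k8s = (st["Host"] == "Running"
          and st["Kubelet"] == "Stopped"
          and st["APIServer"] == "Stopped")
print(st["Name"], no_k8s)  # NoKubernetes-799800 True
```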
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-windows-amd64.exe delete -p NoKubernetes-799800
no_kubernetes_test.go:124: (dbg) Done: out/minikube-windows-amd64.exe delete -p NoKubernetes-799800: (4.7091831s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (25.82s)

TestNoKubernetes/serial/Start (29.75s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-799800 --no-kubernetes --driver=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-799800 --no-kubernetes --driver=docker: (29.7522436s)
--- PASS: TestNoKubernetes/serial/Start (29.75s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.84s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-799800 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-799800 "sudo systemctl is-active --quiet service kubelet": exit status 1 (843.4082ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.84s)

TestNoKubernetes/serial/ProfileList (4.33s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-windows-amd64.exe profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-windows-amd64.exe profile list: (2.5848558s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe profile list --output=json: (1.7449398s)
--- PASS: TestNoKubernetes/serial/ProfileList (4.33s)

TestNoKubernetes/serial/Stop (5.79s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-windows-amd64.exe stop -p NoKubernetes-799800
no_kubernetes_test.go:158: (dbg) Done: out/minikube-windows-amd64.exe stop -p NoKubernetes-799800: (5.7944815s)
--- PASS: TestNoKubernetes/serial/Stop (5.79s)

TestNoKubernetes/serial/StartNoArgs (13.68s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-799800 --driver=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-799800 --driver=docker: (13.6761704s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (13.68s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.77s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-799800 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-799800 "sudo systemctl is-active --quiet service kubelet": exit status 1 (768.3467ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.77s)

TestStoppedBinaryUpgrade/MinikubeLogs (4.33s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-337100
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-337100: (4.3259034s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (4.33s)

TestPause/serial/Start (114.26s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-284400 --memory=2048 --install-addons=false --wait=all --driver=docker
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-284400 --memory=2048 --install-addons=false --wait=all --driver=docker: (1m54.2560272s)
--- PASS: TestPause/serial/Start (114.26s)

TestNetworkPlugins/group/auto/Start (103.08s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-668100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker
E0923 11:44:55.567572    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-668100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker: (1m43.0812878s)
--- PASS: TestNetworkPlugins/group/auto/Start (103.08s)

TestNetworkPlugins/group/kindnet/Start (109.66s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-668100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kindnet-668100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker: (1m49.6604216s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (109.66s)

TestPause/serial/SecondStartNoReconfiguration (55.64s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-284400 --alsologtostderr -v=1 --driver=docker
E0923 11:45:47.465111    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-284400 --alsologtostderr -v=1 --driver=docker: (55.6245119s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (55.64s)

TestNetworkPlugins/group/calico/Start (169.64s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-668100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p calico-668100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker: (2m49.6438355s)
--- PASS: TestNetworkPlugins/group/calico/Start (169.64s)

TestNetworkPlugins/group/auto/KubeletFlags (1.14s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-668100 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p auto-668100 "pgrep -a kubelet": (1.1362599s)
I0923 11:46:03.559962    4316 config.go:182] Loaded profile config "auto-668100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (1.14s)

TestNetworkPlugins/group/auto/NetCatPod (25.79s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-668100 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gw6jv" [01c13cc6-3423-4667-88aa-a7713ba73e62] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0923 11:46:04.384383    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-gw6jv" [01c13cc6-3423-4667-88aa-a7713ba73e62] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 25.008831s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (25.79s)

TestNetworkPlugins/group/auto/DNS (0.41s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-668100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.41s)

TestNetworkPlugins/group/auto/Localhost (0.32s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-668100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.32s)

TestNetworkPlugins/group/auto/HairPin (0.31s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-668100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.31s)

TestPause/serial/Pause (1.42s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-284400 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-284400 --alsologtostderr -v=5: (1.4201627s)
--- PASS: TestPause/serial/Pause (1.42s)

TestPause/serial/VerifyStatus (0.91s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-284400 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-284400 --output=json --layout=cluster: exit status 2 (910.8997ms)

                                                
                                                
-- stdout --
	{"Name":"pause-284400","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-284400","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.91s)

TestPause/serial/Unpause (1.3s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-284400 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-284400 --alsologtostderr -v=5: (1.3004906s)
--- PASS: TestPause/serial/Unpause (1.30s)

TestPause/serial/PauseAgain (1.75s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-284400 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-284400 --alsologtostderr -v=5: (1.7530216s)
--- PASS: TestPause/serial/PauseAgain (1.75s)

TestPause/serial/DeletePaused (4.99s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-284400 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-284400 --alsologtostderr -v=5: (4.992176s)
--- PASS: TestPause/serial/DeletePaused (4.99s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-5jlfj" [d2d72116-8076-454c-a8e5-8b590a64407d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.0089888s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestPause/serial/VerifyDeletedResources (11.9s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (11.5850369s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-284400
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-284400: exit status 1 (87.009ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-284400: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (11.90s)
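The pass condition in VerifyDeletedResources is the exit code: once the `pause-284400` profile is deleted, `docker volume inspect pause-284400` must exit non-zero (the "no such volume" stderr above). A minimal sketch of that exit-code rule, using stub commands instead of a real Docker daemon (`resource_gone` and the `false`/`true` stand-ins are illustrative, not part of the test suite):

```python
# Sketch only: emulate the "deleted when inspect exits non-zero" check without
# a Docker daemon. In the real test the argv would be
# ["docker", "volume", "inspect", "pause-284400"].
import subprocess

def resource_gone(argv):
    """Return True when an inspect-style command exits non-zero (resource absent)."""
    return subprocess.run(argv, capture_output=True).returncode != 0

# `false` exits 1 (like inspect on a deleted volume); `true` exits 0.
print(resource_gone(["false"]))  # True  -> volume is gone, check passes
print(resource_gone(["true"]))   # False -> volume survived, check would fail
```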

TestNetworkPlugins/group/kindnet/KubeletFlags (0.79s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kindnet-668100 "pgrep -a kubelet"
I0923 11:46:56.088783    4316 config.go:182] Loaded profile config "kindnet-668100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.79s)

TestNetworkPlugins/group/kindnet/NetCatPod (20.55s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-668100 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5j5gh" [4a7c9404-eb36-4d19-ae28-b54b427211e3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5j5gh" [4a7c9404-eb36-4d19-ae28-b54b427211e3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 20.012829s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (20.55s)

TestNetworkPlugins/group/custom-flannel/Start (106.8s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-flannel-668100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-flannel-668100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker: (1m46.7987908s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (106.80s)

TestNetworkPlugins/group/kindnet/DNS (0.45s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-668100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.45s)
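The DNS subtests above pass when `nslookup kubernetes.default` exits 0 inside the netcat pod. The same "does this name resolve?" rule can be sketched with the stdlib resolver; `resolves` is a hypothetical helper, and `localhost` / a reserved `.invalid` name stand in for the cluster service names, since no cluster DNS is available here:

```python
# Sketch only: mirror the pass/fail rule of the DNS subtest
# (`nslookup <name>` exits 0 iff the name resolves).
import socket

def resolves(name):
    """True if the resolver returns at least one address for name."""
    try:
        return len(socket.getaddrinfo(name, None)) > 0
    except socket.gaierror:
        return False

print(resolves("localhost"))             # local names resolve -> True
print(resolves("no-such-host.invalid"))  # RFC 2606 reserves .invalid -> False
```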

TestNetworkPlugins/group/kindnet/Localhost (0.43s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-668100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.43s)

TestNetworkPlugins/group/kindnet/HairPin (0.52s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-668100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.52s)
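Localhost and HairPin both boil down to a TCP connect probe: `nc -w 5 -z <host> <port>` exits 0 iff the connection succeeds (HairPin checks the pod can reach itself through its own service name). A self-contained sketch of that probe against a throwaway local listener, since no cluster service is available here (`can_connect` is an illustrative helper, not test-suite code):

```python
# Sketch only: the in-pod `nc -w 5 -z <host> <port>` probe, re-expressed as a
# stdlib TCP connect check against a local listener.
import socket

def can_connect(host, port, timeout=5.0):
    """True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

print(can_connect("127.0.0.1", port))  # listener up -> True
srv.close()
print(can_connect("127.0.0.1", port))  # listener gone -> refused -> False
```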

TestNetworkPlugins/group/false/Start (113.96s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-668100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p false-668100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker: (1m53.957677s)
--- PASS: TestNetworkPlugins/group/false/Start (113.96s)

TestNetworkPlugins/group/enable-default-cni/Start (103.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-668100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-668100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker: (1m43.1793978s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (103.18s)

TestNetworkPlugins/group/calico/ControllerPod (6.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-wf88d" [2a20a84e-8b06-4884-8e97-2b1119378809] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.0101901s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.02s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.79s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p custom-flannel-668100 "pgrep -a kubelet"
I0923 11:48:52.677651    4316 config.go:182] Loaded profile config "custom-flannel-668100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.79s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (18.59s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-668100 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zjtjx" [6040f7a6-08a6-44fd-afa1-2a814da7ef16] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-zjtjx" [6040f7a6-08a6-44fd-afa1-2a814da7ef16] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 18.0110605s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (18.59s)

TestNetworkPlugins/group/calico/KubeletFlags (1.07s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p calico-668100 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p calico-668100 "pgrep -a kubelet": (1.0719409s)
I0923 11:48:56.936898    4316 config.go:182] Loaded profile config "calico-668100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (1.07s)

TestNetworkPlugins/group/calico/NetCatPod (18.94s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-668100 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fnlbz" [603969c0-056e-454a-9d05-1d79ac3a9604] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fnlbz" [603969c0-056e-454a-9d05-1d79ac3a9604] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 18.0168241s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (18.94s)

TestNetworkPlugins/group/custom-flannel/DNS (0.41s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-668100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.41s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.36s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-668100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.36s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.36s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-668100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.36s)

TestNetworkPlugins/group/false/KubeletFlags (0.88s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-668100 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.88s)

TestNetworkPlugins/group/false/NetCatPod (18.72s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-668100 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-bgbzz" [54a42ac8-cd2c-4bdb-80c5-4270006a9f04] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-bgbzz" [54a42ac8-cd2c-4bdb-80c5-4270006a9f04] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 18.0062401s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (18.72s)

TestNetworkPlugins/group/calico/DNS (0.52s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-668100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.52s)

TestNetworkPlugins/group/calico/Localhost (0.5s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-668100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.50s)

TestNetworkPlugins/group/calico/HairPin (0.48s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-668100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.48s)

TestNetworkPlugins/group/false/DNS (0.38s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-668100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.38s)

TestNetworkPlugins/group/false/Localhost (0.32s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-668100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.32s)

TestNetworkPlugins/group/false/HairPin (0.33s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-668100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.33s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.91s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-668100 "pgrep -a kubelet"
I0923 11:49:53.891290    4316 config.go:182] Loaded profile config "enable-default-cni-668100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.91s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (20.68s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-668100 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9blw7" [7f568258-5fc9-480d-b06d-315db290e81f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9blw7" [7f568258-5fc9-480d-b06d-315db290e81f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 20.0074985s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (20.68s)

TestNetworkPlugins/group/flannel/Start (122.85s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p flannel-668100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p flannel-668100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker: (2m2.8525691s)
--- PASS: TestNetworkPlugins/group/flannel/Start (122.85s)

TestNetworkPlugins/group/bridge/Start (116.05s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-668100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-668100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker: (1m56.0488025s)
--- PASS: TestNetworkPlugins/group/bridge/Start (116.05s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-668100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.39s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-668100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.35s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-668100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.33s)

TestNetworkPlugins/group/kubenet/Start (110.68s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-668100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-668100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker: (1m50.6802856s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (110.68s)

TestStartStop/group/old-k8s-version/serial/FirstStart (220.8s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-656000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0
E0923 11:51:04.321768    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:51:04.329773    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:51:04.341782    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:51:04.363401    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:51:04.398431    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:51:04.406421    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:51:04.489388    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:51:04.651868    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:51:04.974294    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:51:05.616953    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:51:06.900585    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:51:09.463122    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:51:14.585585    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:51:18.670120    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:51:24.828986    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:51:45.311955    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:51:49.300578    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:51:49.308568    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:51:49.321604    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:51:49.344585    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:51:49.387578    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:51:49.470584    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:51:49.633598    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:51:49.955880    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:51:50.598963    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:51:51.881795    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:51:54.444795    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:51:59.567092    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-656000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.20.0: (3m40.7988339s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (220.80s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.79s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-668100 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.79s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-l7gbp" [def57f68-5dc8-418d-86b0-7e364788b029] Running
I0923 11:52:06.575131    4316 config.go:182] Loaded profile config "bridge-668100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.0096735s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/bridge/NetCatPod (18.57s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-668100 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7jqfr" [4d512897-843b-420a-b123-a3d5461ddca2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0923 11:52:09.810337    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-7jqfr" [4d512897-843b-420a-b123-a3d5461ddca2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 18.0078992s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (18.57s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.9s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p flannel-668100 "pgrep -a kubelet"
I0923 11:52:13.105895    4316 config.go:182] Loaded profile config "flannel-668100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.90s)

TestNetworkPlugins/group/flannel/NetCatPod (18.7s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-668100 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-wwc7g" [5acfaeb6-d9dd-4344-991b-3f6acfbe0a21] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-wwc7g" [5acfaeb6-d9dd-4344-991b-3f6acfbe0a21] Running
E0923 11:52:30.294468    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 18.0151272s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (18.70s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.88s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-668100 "pgrep -a kubelet"
I0923 11:52:23.066966    4316 config.go:182] Loaded profile config "kubenet-668100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.88s)

TestNetworkPlugins/group/kubenet/NetCatPod (18.67s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-668100 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pmbm6" [b6d531a4-63c2-47e7-baa0-b8555554df48] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-pmbm6" [b6d531a4-63c2-47e7-baa0-b8555554df48] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 18.0089884s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (18.67s)

TestNetworkPlugins/group/bridge/DNS (0.39s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-668100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.39s)

TestNetworkPlugins/group/bridge/Localhost (0.36s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-668100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.36s)

TestNetworkPlugins/group/bridge/HairPin (0.45s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-668100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0923 11:52:26.275926    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.45s)

TestNetworkPlugins/group/flannel/DNS (0.36s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-668100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.36s)

TestNetworkPlugins/group/flannel/Localhost (0.36s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-668100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.36s)

TestNetworkPlugins/group/flannel/HairPin (0.39s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-668100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.39s)

TestNetworkPlugins/group/kubenet/DNS (0.41s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-668100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.41s)

TestNetworkPlugins/group/kubenet/Localhost (0.35s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-668100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.35s)

TestNetworkPlugins/group/kubenet/HairPin (0.35s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-668100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.35s)

TestStartStop/group/no-preload/serial/FirstStart (136.32s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-826900 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-826900 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.31.1: (2m16.3148078s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (136.32s)

TestStartStop/group/embed-certs/serial/FirstStart (119.82s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-618200 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-618200 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.31.1: (1m59.8176237s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (119.82s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (113.75s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-581000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.31.1
E0923 11:53:48.202804    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:53:49.869158    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:53:49.877333    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:53:49.889441    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:53:49.911640    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:53:49.955108    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:53:50.039467    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:53:50.202151    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:53:50.524524    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:53:51.166621    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:53:52.449300    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:53:53.219367    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:53:53.225938    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:53:53.237900    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:53:53.260114    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:53:53.302163    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:53:53.383860    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:53:53.546112    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:53:53.868855    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:53:54.511548    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:53:55.012573    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:53:55.795193    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:53:58.356984    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:54:00.134760    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:54:03.479924    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:54:10.378490    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:54:13.679489    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:54:13.687484    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:54:13.700484    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:54:13.722496    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:54:13.722496    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:54:13.764107    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:54:13.847099    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:54:14.009735    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:54:14.332027    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:54:14.974197    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:54:16.260034    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:54:18.822267    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:54:23.944825    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:54:30.861641    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:54:33.185345    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:54:34.188189    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:54:34.205613    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-581000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.31.1: (1m53.749159s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (113.75s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-656000 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c25af2b3-8836-4e0f-9d20-f865d81a0582] Pending
helpers_test.go:344: "busybox" [c25af2b3-8836-4e0f-9d20-f865d81a0582] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c25af2b3-8836-4e0f-9d20-f865d81a0582] Running
E0923 11:54:54.552320    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:54:54.559307    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:54:54.572312    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:54:54.595335    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:54:54.638330    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:54:54.671712    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:54:54.720772    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:54:54.883576    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.0091294s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-656000 exec busybox -- /bin/sh -c "ulimit -n"
E0923 11:54:55.206093    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.29s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-656000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0923 11:54:55.596546    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:54:55.849071    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:54:57.131543    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-656000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.1008644s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-656000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.43s)

TestStartStop/group/old-k8s-version/serial/Stop (12.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-656000 --alsologtostderr -v=3
E0923 11:54:59.694363    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:55:04.818096    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-656000 --alsologtostderr -v=3: (12.5416088s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.54s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.79s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-656000 -n old-k8s-version-656000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-656000 -n old-k8s-version-656000: exit status 7 (315.7432ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-656000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.79s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (13.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-618200 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fbb60272-3875-4992-890c-cbbbf5a56493] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fbb60272-3875-4992-890c-cbbbf5a56493] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 12.1340249s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-618200 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (13.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (18.6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-581000 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4570265c-7a5a-433a-a66a-8f828d7e5e89] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4570265c-7a5a-433a-a66a-8f828d7e5e89] Running
E0923 11:55:35.543097    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:55:35.636368    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 12.0065531s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-581000 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:196: (dbg) Done: kubectl --context default-k8s-diff-port-581000 exec busybox -- /bin/sh -c "ulimit -n": (6.0304494s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (18.60s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-826900 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c3eb98a6-974b-4fdc-b04b-df1ef6d8d620] Pending
helpers_test.go:344: "busybox" [c3eb98a6-974b-4fdc-b04b-df1ef6d8d620] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c3eb98a6-974b-4fdc-b04b-df1ef6d8d620] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 20.0249845s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-826900 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (21.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (7.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-618200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-618200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (7.6551627s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-618200 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (7.98s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.71s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-581000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-581000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.3849311s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-581000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.71s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.57s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-618200 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-618200 --alsologtostderr -v=3: (12.5698826s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.57s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.64s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-581000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-581000 --alsologtostderr -v=3: (12.6397748s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.64s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-826900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-826900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.9892428s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-826900 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.63s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-826900 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-826900 --alsologtostderr -v=3: (12.6342748s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.63s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.82s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-618200 -n embed-certs-618200
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-618200 -n embed-certs-618200: exit status 7 (332.477ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-618200 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.82s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (288.82s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-618200 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-618200 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.31.1: (4m47.808035s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-618200 -n embed-certs-618200
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-618200 -n embed-certs-618200: (1.0053369s)
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (288.82s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.86s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-581000 -n default-k8s-diff-port-581000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-581000 -n default-k8s-diff-port-581000: exit status 7 (368.066ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-diff-port-581000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.86s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (290.54s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-581000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.31.1
E0923 11:56:04.335680    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:56:04.412430    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-581000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.31.1: (4m49.5037898s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-581000 -n default-k8s-diff-port-581000
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-581000 -n default-k8s-diff-port-581000: (1.039072s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (290.54s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.88s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-826900 -n no-preload-826900
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-826900 -n no-preload-826900: exit status 7 (372.9982ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-826900 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.88s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (293.57s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-826900 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.31.1
E0923 11:56:16.509222    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:56:32.052539    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:56:33.753131    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:56:37.096794    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:56:49.314621    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:56:57.562564    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:06.214186    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:06.221201    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:06.234175    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:06.257233    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:06.300307    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:06.382990    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:06.545139    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:06.867425    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:07.129426    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:07.137404    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:07.150407    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:07.173417    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:07.215935    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:07.298621    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:07.461566    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:07.509953    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:07.783479    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:08.425197    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:08.792315    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:09.708546    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:11.354456    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:12.270845    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:16.476825    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:17.036742    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:17.393327    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:23.692158    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:23.698694    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:23.710145    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:23.732707    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:23.776191    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:23.857414    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:24.019660    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:24.341870    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:24.983792    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:26.267010    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:26.720464    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:27.636300    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:28.828655    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:33.951608    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:38.436297    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:44.194206    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:47.203835    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:57:48.119908    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:58:04.677373    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:58:28.167603    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:58:29.084374    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:58:45.641231    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:58:49.884397    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:58:53.233313    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:59:13.694109    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:59:17.603097    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:59:20.947596    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:59:41.412611    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:59:50.093055    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:59:51.010479    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:59:54.566627    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 11:59:55.610740    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-205800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:00:07.567643    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:00:22.288496    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-826900 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.31.1: (4m52.6626197s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-826900 -n no-preload-826900
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (293.57s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-w976p" [628cc616-983f-4b1c-ac6f-d1e7a412a3f1] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0075165s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-mwtc9" [e7865668-45e6-48c9-b145-eaee685916b8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0077134s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.34s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-w976p" [628cc616-983f-4b1c-ac6f-d1e7a412a3f1] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0094313s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-618200 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.34s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-mwtc9" [e7865668-45e6-48c9-b145-eaee685916b8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0136716s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-581000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.42s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.71s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p embed-certs-618200 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.71s)

TestStartStop/group/embed-certs/serial/Pause (7.45s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-618200 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-618200 --alsologtostderr -v=1: (1.472245s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-618200 -n embed-certs-618200
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-618200 -n embed-certs-618200: exit status 2 (926.221ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-618200 -n embed-certs-618200
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-618200 -n embed-certs-618200: exit status 2 (932.0479ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-618200 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-618200 --alsologtostderr -v=1: (1.4624959s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-618200 -n embed-certs-618200
E0923 12:01:04.349494    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-618200 -n embed-certs-618200: (1.5454777s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-618200 -n embed-certs-618200
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-618200 -n embed-certs-618200: (1.110802s)
--- PASS: TestStartStop/group/embed-certs/serial/Pause (7.45s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-ns8lq" [5f6ac15a-bf23-425e-b280-ddb3951c8e1c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0091201s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.63s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p default-k8s-diff-port-581000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.63s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (7.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-581000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-581000 --alsologtostderr -v=1: (1.5392656s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-581000 -n default-k8s-diff-port-581000
E0923 12:01:04.426486    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-581000 -n default-k8s-diff-port-581000: exit status 2 (1.0095212s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-581000 -n default-k8s-diff-port-581000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-581000 -n default-k8s-diff-port-581000: exit status 2 (961.7317ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-581000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-581000 --alsologtostderr -v=1: (1.4406083s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-581000 -n default-k8s-diff-port-581000
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-581000 -n default-k8s-diff-port-581000: (1.6831632s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-581000 -n default-k8s-diff-port-581000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.58s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.49s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-ns8lq" [5f6ac15a-bf23-425e-b280-ddb3951c8e1c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0350175s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-826900 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.49s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.77s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-826900 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.77s)

TestStartStop/group/no-preload/serial/Pause (8.6s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-826900 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p no-preload-826900 --alsologtostderr -v=1: (2.1388032s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-826900 -n no-preload-826900
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-826900 -n no-preload-826900: exit status 2 (1.4828036s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-826900 -n no-preload-826900
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-826900 -n no-preload-826900: exit status 2 (1.0102762s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p no-preload-826900 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p no-preload-826900 --alsologtostderr -v=1: (1.6468567s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-826900 -n no-preload-826900
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-826900 -n no-preload-826900: (1.4221843s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-826900 -n no-preload-826900
--- PASS: TestStartStop/group/no-preload/serial/Pause (8.60s)

TestStartStop/group/newest-cni/serial/FirstStart (67.61s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-895600 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-895600 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.31.1: (1m7.608747s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (67.61s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-j8m2d" [d5dfc62e-a166-47f0-bb5d-0ef1d8b76c7d] Running
E0923 12:02:06.228827    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:02:07.143381    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0095408s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-j8m2d" [d5dfc62e-a166-47f0-bb5d-0ef1d8b76c7d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0093578s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-656000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.51s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-656000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.67s)

TestStartStop/group/old-k8s-version/serial/Pause (7.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-656000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p old-k8s-version-656000 --alsologtostderr -v=1: (1.5304639s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-656000 -n old-k8s-version-656000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-656000 -n old-k8s-version-656000: exit status 2 (1.0785959s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-656000 -n old-k8s-version-656000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-656000 -n old-k8s-version-656000: exit status 2 (1.0216728s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p old-k8s-version-656000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p old-k8s-version-656000 --alsologtostderr -v=1: (1.4788882s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-656000 -n old-k8s-version-656000
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-656000 -n old-k8s-version-656000: (1.6683001s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-656000 -n old-k8s-version-656000
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-656000 -n old-k8s-version-656000: (1.1803894s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (7.96s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.63s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-895600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-895600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.6258501s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.63s)

TestStartStop/group/newest-cni/serial/Stop (8.79s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-895600 --alsologtostderr -v=3
E0923 12:02:27.515244    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-734700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-895600 --alsologtostderr -v=3: (8.7887327s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.79s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.82s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-895600 -n newest-cni-895600
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-895600 -n newest-cni-895600: exit status 7 (322.4219ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-895600 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.82s)

TestStartStop/group/newest-cni/serial/SecondStart (29.21s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-895600 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.31.1
E0923 12:02:33.943156    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:02:34.860354    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0923 12:02:51.418430    4316 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-668100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-895600 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.31.1: (28.0000433s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-895600 -n newest-cni-895600
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-895600 -n newest-cni-895600: (1.2085627s)
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (29.21s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.85s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-895600 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.85s)
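
The VerifyKubernetesImages step above inspects the output of `minikube image list --format=json`. As a rough illustration of how such output can be checked, here is a minimal Python sketch; the real harness is written in Go, and the `repoTags` field name and the sample data below are assumptions for illustration, not taken from this run:

```python
import json

def missing_images(image_list_json: str, required: list[str]) -> list[str]:
    """Return the required image references absent from an image-list JSON dump.

    Assumes (illustratively) a JSON array of objects whose "repoTags" field
    lists image references such as "registry.k8s.io/pause:3.10".
    """
    entries = json.loads(image_list_json)
    present = {tag for entry in entries for tag in entry.get("repoTags", [])}
    return [ref for ref in required if ref not in present]

# Sample data standing in for real `minikube image list --format=json` output.
sample = json.dumps([
    {"repoTags": ["registry.k8s.io/kube-apiserver:v1.31.1"]},
    {"repoTags": ["registry.k8s.io/pause:3.10"]},
])

print(missing_images(sample, ["registry.k8s.io/pause:3.10",
                              "registry.k8s.io/etcd:3.5.15-0"]))
# → ['registry.k8s.io/etcd:3.5.15-0']
```

A test would fail only when the returned list is non-empty, i.e. when an expected Kubernetes image is not loaded in the cluster.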
TestStartStop/group/newest-cni/serial/Pause (7.37s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-895600 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p newest-cni-895600 --alsologtostderr -v=1: (1.8685386s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-895600 -n newest-cni-895600
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-895600 -n newest-cni-895600: exit status 2 (860.1713ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-895600 -n newest-cni-895600
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-895600 -n newest-cni-895600: exit status 2 (877.0216ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p newest-cni-895600 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p newest-cni-895600 --alsologtostderr -v=1: (1.3601718s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-895600 -n newest-cni-895600
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-895600 -n newest-cni-895600: (1.4067354s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-895600 -n newest-cni-895600
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-895600 -n newest-cni-895600: (1.0016912s)
--- PASS: TestStartStop/group/newest-cni/serial/Pause (7.37s)
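
In the Pause log above, `minikube status` exits non-zero while components are paused or stopped, and the harness records `status error: exit status 2 (may be ok)` instead of failing the test. A minimal Python sketch of that tolerance pattern, assuming the helper name `run_status_tolerant` and the tolerated code set are illustrative choices rather than minikube's actual (Go) implementation:

```python
import subprocess

# Exit codes the caller chooses to tolerate; the real harness logs
# "status error: exit status 2 (may be ok)" rather than failing the test.
TOLERATED = {0, 2}

def run_status_tolerant(cmd: list[str]) -> tuple[str, int]:
    """Run a status-style command, tolerating a non-zero exit that may be ok."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode not in TOLERATED:
        raise RuntimeError(f"status failed: exit status {proc.returncode}")
    if proc.returncode != 0:
        print(f"status error: exit status {proc.returncode} (may be ok)")
    return proc.stdout.strip(), proc.returncode

# Stand-in command: a paused component reports its state but exits 2.
out, code = run_status_tolerant(["sh", "-c", "echo Paused; exit 2"])
```

The point of the pattern is that a paused cluster is an expected intermediate state during this test, so only exit codes outside the tolerated set abort the run.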

Test skip (24/339)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestAddons/parallel/Ingress (17.19s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-205800 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-205800 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-205800 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [fbe8cd4f-d866-4c5a-be13-8e289bb3d2ce] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [fbe8cd4f-d866-4c5a-be13-8e289bb3d2ce] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 15.0112772s
I0923 10:40:07.984352    4316 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-205800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:280: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (17.19s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.01s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-734700 --alsologtostderr -v=1]
functional_test.go:916: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-734700 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 8536: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.01s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:64: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/ServiceCmdConnect (18.56s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-734700 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-734700 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-8l2w8" [81b19fdf-eede-4bca-b735-649c9831c25a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-8l2w8" [81b19fdf-eede-4bca-b735-649c9831c25a] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 18.006807s
functional_test.go:1646: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (18.56s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/cilium (16.41s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-668100 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-668100

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-668100

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-668100

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-668100

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-668100

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-668100

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-668100

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-668100

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-668100

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-668100

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-668100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668100"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-668100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668100"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-668100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668100"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-668100

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-668100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668100"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-668100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668100"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-668100" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-668100" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-668100" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-668100" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-668100" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-668100" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-668100" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-668100" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-668100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668100"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-668100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668100"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-668100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668100"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-668100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668100"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-668100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668100"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-668100

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-668100

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-668100" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-668100" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-668100

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-668100

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-668100" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-668100" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-668100" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-668100" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-668100" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-668100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668100"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-668100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668100"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-668100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668100"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-668100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668100"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-668100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668100"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-668100

>>> host: docker daemon status:
* Profile "cilium-668100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668100"

>>> host: docker daemon config:
* Profile "cilium-668100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668100"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-668100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668100"

>>> host: docker system info:
* Profile "cilium-668100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668100"

>>> host: cri-docker daemon status:
* Profile "cilium-668100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668100"

>>> host: cri-docker daemon config:
* Profile "cilium-668100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668100"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-668100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668100"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-668100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668100"

>>> host: cri-dockerd version:
* Profile "cilium-668100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668100"

>>> host: containerd daemon status:
* Profile "cilium-668100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668100"

>>> host: containerd daemon config:
* Profile "cilium-668100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668100"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-668100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668100"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-668100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668100"

>>> host: containerd config dump:
* Profile "cilium-668100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668100"

>>> host: crio daemon status:
* Profile "cilium-668100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668100"

>>> host: crio daemon config:
* Profile "cilium-668100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668100"

>>> host: /etc/crio:
* Profile "cilium-668100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668100"

>>> host: crio config:
* Profile "cilium-668100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-668100"

----------------------- debugLogs end: cilium-668100 [took: 15.7706574s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-668100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cilium-668100
--- SKIP: TestNetworkPlugins/group/cilium (16.41s)

x
+
TestStartStop/group/disable-driver-mounts (0.69s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-511000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-511000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.69s)
